Survey of CPU Cache-Based Side-Channel Attacks: Systematic Analysis, Security Models, and Countermeasures
Chao Su, Qingkai Zeng, “Survey of CPU Cache-Based Side-Channel Attacks: Systematic Analysis, Security Models, and Countermeasures”, Security and Communication Networks, vol. 2021, Article ID 5559552, 15 pages, 2021. https://doi.org/10.1155/2021/5559552.
Table of Contents
- Survey of CPU Cache-Based Side-Channel Attacks: Systematic Analysis, Security Models, and Countermeasures
- 1. Attack Workflow.
- (1) Define the Connection between the Victim Program and the Attacker Program.
- (2) Collect the Activities in the Cache of the Attacker’s Program While It Is Running.
- (3) Speculate on the Cache Changes of the Victim Program.
- (4) Infer the Sensitive Information of the Victim’s Program.
- 2. Example: RSA side channel attack.
- 3. Threats of the Attacks.
- 4. Analysis Model of the Side-Channel Attacks.
- 4.1 Cache side-channel attack model.
- (1) Program Vulnerability.
- (2) Cache Type.
- (3) Attack Pattern.
- ① Prime + Probe.
- ② Flush + Reload.
- ③ Evict + Reload.
- ④ Evict + Time.
- ⑤ Flush + Flush.
- ⑥ Invalidate + Transfer.
- (4) Range.
- 4.2 Trend of the Attacks.
- 4.3 Attack Conditions.
- 5. Analysis of the Defenses.
- 5.1 Information Independency.
- 5.2 Time Blinding.
- 5.3 Time Sharing.
- 5.4 Resource Isolation.
- 5.5 Anti-Co-Resident Detection.
- Network-based co-residence detection
- Co-residence detection by monitoring cache load
- 5.6 Channel Interference.
- 6. Challenges and Trends.
- 6.1 Challenges.
- 6.2 Trends.
- (1) Strengthen security awareness of programmers, and create more effective **code review** policies in software engineering.
- (2) The diversification of obfuscation and identification of the cache.
- 7. Conclusion.
When the CPU tries to access data through the cache, it first searches the L1 cache; on a miss, it moves to the L2 cache, and so on. The hierarchical structure of the CPU cache has the characteristic that if a piece of data exists in a high-level cache (such as the L1 cache), it must also be found in the lower-level caches (such as the L3 cache).
The inclusiveness of the CPU cache is defined as follows:
Let m denote a piece of memory data, and let L1, L2, and L3 denote the contents of the L1, L2, and L3 caches. Then,

m ∈ L1 ⟹ m ∈ L2 ⟹ m ∈ L3

The inclusiveness of the CPU cache also ensures that an eviction from the L3 cache leads to evictions from the L2 and L1 caches, which means

m ∉ L3 ⟹ m ∉ L2 ⟹ m ∉ L1
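The inclusiveness property can be sketched as a toy Python model (the class and method names are hypothetical; real caches track sets and ways in hardware):

```python
# Toy model of an inclusive three-level cache hierarchy (illustration only).
class InclusiveCacheHierarchy:
    def __init__(self):
        self.levels = {"L1": set(), "L2": set(), "L3": set()}

    def load(self, addr):
        # Inclusiveness: a line brought into L1 is also installed in L2 and L3.
        for level in ("L1", "L2", "L3"):
            self.levels[level].add(addr)

    def evict_from_l3(self, addr):
        # Evicting from the LLC forces back-invalidation of L2 and L1.
        for level in ("L1", "L2", "L3"):
            self.levels[level].discard(addr)

    def present(self, addr, level):
        return addr in self.levels[level]

cache = InclusiveCacheHierarchy()
cache.load(0x1000)
assert cache.present(0x1000, "L3")      # m in L1 implies m in L3
cache.evict_from_l3(0x1000)
assert not cache.present(0x1000, "L1")  # m not in L3 implies m not in L1
```

This back-invalidation is exactly what Flush+Reload-style attacks rely on: evicting a line from the shared LLC removes it from every private cache as well.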
1. Attack Workflow.
To cause information leakage, a side-channel attack needs to complete the following four steps.
(1) Define the Connection between the Victim Program and the Attacker Program.
The first step in launching a channel attack is to search for an available channel. The connection between the victim and attacker programs indicates the carrier of the channel. In CPU cache-based side-channel attacks, the carrier is the CPU cache, which means the attacker must search the cache for a correlation between the attacker program and the victim program. For example, the work of Eckert et al. [19] exploited a shared library (OpenSSL 0.9.8n) that both the attacker and the victim call. Under a completely fair scheduling setting, they occupy exactly the same cache. The huge-page mechanism can also introduce a "connection".
Virtualization applications such as VMware and Xen usually deploy huge pages to manage the physical memory of guest virtual machines [20]. In this case, the attacker uses the huge-page mechanism to establish a connection between virtual pages and physical pages. This connection allows the attack to snoop on other processes' data through the cache.
(2) Collect the Activities in the Cache of the Attacker’s Program While It Is Running.
Based on the connection between the victim and attacker programs, the attacker detects its own cache state with an appropriate memory read/write pattern. In this stage the attacker usually presets the cache state. For example, through consecutive memory reads and writes, the attacker can ensure that its target memory is loaded into the cache; it can also use the CLFLUSH instruction or other methods to ensure that the content is evicted from the cache. While the victim process executes, the attacker again accesses the memory many times in succession; the cache state can be recorded through the access latency. This step executes concurrently with the victim program.
(3) Speculate on the Cache Changes of the Victim Program.
There are usually two types of connections between cache states from the victim and attacker processes: consistency and exclusion.
The consistency connection means that the attacker and victim processes share the same cache state (hit or miss), which is widespread in side-channel attacks based on shared libraries [5, 6, 16, 20–23]. It allows content outside the cache to be loaded by a competitor, and sensitive information can therefore be monitored.
Exclusion: the attacker and victim processes use the cache mutually exclusively. When one of them tries to occupy the cache, it first evicts the competitor's content from the cache, causing a change in the cache state.
(4) Infer the Sensitive Information of the Victim’s Program.
Here, consider the "connection" between the victim program's cache state and its sensitive information. A cache side-channel attack requires a priori analysis of the victim program, enabling the attacker to define the association between state changes and the victim's sensitive information.
2. Example: RSA side channel attack.
When the attacker knows the state changes of the victim program, it can speculate on the sensitive information, finally causing information leakage. The complete attack process is illustrated in Figure 2.
- In step 1, attackers check whether the attacker program and the victim program co-reside in the same system. Co-residence ensures that there is a connection between them, and thus they can use the same cache.
- In step 2, define s as the sensitive information in the victim program. When the victim program is executed, specific state changes (defined as p) displayed in the cache are related to the sensitive information s. That is, there is a mapping p = f(s).
- In step 3, according to the correlation analysis of the victim program and the attacker program, define the relevance (marked as g) of the attacker's cache state (marked as q), which means q = g(p).
- In step 4, the process of restoring sensitive information in a side-channel attack can be expressed as s = f⁻¹(g⁻¹(q)).
These four steps represent the workflow of revealing sensitive information through a cache-based side-channel attack.
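The mappings f and g and their inversion can be illustrated with a toy single-bit example (the latency constants, threshold, and set names are invented for illustration only):

```python
SLOW, FAST = 300, 80            # illustrative probe latencies (cycles)
THRESHOLD = 150

def f(secret_bit):              # victim: the key bit decides which set is touched
    return "set_A" if secret_bit else "set_B"

def g(victim_set):              # attacker primed set_A; a victim hit there evicts it
    return SLOW if victim_set == "set_A" else FAST

def recover(latency):           # attacker inverts g (via threshold) then f (lookup)
    victim_set = "set_A" if latency > THRESHOLD else "set_B"
    return 1 if victim_set == "set_A" else 0

for bit in (0, 1):              # s == f^-1(g^-1(q)) for both key-bit values
    assert recover(g(f(bit))) == bit
```

Real attacks repeat this recovery per key bit or per table index, with noisy latencies averaged over many rounds.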
3. Threats of the Attacks.
(a) Disclosure of sensitive information such as privacy
(b) Deliver the results of malicious code execution
(c) Denial of service
CPU cache-based side-channel attacks can launch attacks across different domains:
- different processes
- different VMs
- different platforms
- different CPUs
- …
Its most widely used application is to leak **sensitive information** of the target program.
- the key in the encryption and decryption algorithm,
- the crucial data calculated by the program
- the user’s behavior
- the memory layout
Due to the unpredictability of the cache state, this type of sensitive-information leakage is difficult to capture through pattern or feature detection, so it is well concealed.
When the CPU detects a misprediction, it rolls back and executes the actual branch. However, the malicious instructions have already been executed, and their results have already been delivered to the attacker through a side-channel attack on a predefined array. The rollback does not remove the effects on the cache.
Because speculative execution rolls back its results once the wrong branch is discovered, the malicious behavior being executed cannot be discovered through code tracing or dynamic analysis.
In this case, the threat of side-channel attacks lies in the delivery of the results of malicious code execution.
Since low-level caches (such as the last-level cache, LLC) are often shared by multiple cores, a malicious program can also use a cache side-channel attack to keep preempting the cache, preventing other programs from using the cache and memory normally. Preempting the shared cache degrades the victim program's performance, leading to denial of service.
- Studies [24–26] show that this type of attack can cause performance degradation of up to 95% and overhead increases of 7.9×.
4. Analysis Model of the Side-Channel Attacks.
We define the model of cache side-channel attacks as
CACHE_SIDE_CHANNEL_ATK:
⟨vulne, type, pattern, range⟩ |
vulne ∈ LEAK_CODE;
type ∈ CACHE_TYPE;
pattern ∈ CACHE_ACT;
range ∈ {core, package, NUMA, system}.
LEAK_CODE: refers to the vulnerable code inside the victim program that can be compromised.
- It is the basis for information leakage.
- CVE-2018-0737 [27], CVE-2017-5754 [28], and CVE-2016-2178 [29] expose this type of code.
Vulnerabilities may include leaks of the execution time of different branches or of memory access patterns, among others.
CACHE_TYPE: the type of cache used by side-channel attacks, which is the carrier of the channel. It can be the L1-I cache, the L1-D cache, the LLC, or any available CPU cache. As research deepens, attackers tend to use more general caches rather than specialized ones, which widens the attack range and brings increasing threats.
CACHE_ACT: the connection between the state changes caused by the attacker and the victim processes, including mutual exclusion and consistency, as mentioned above.
- According to CACHE_ACT, the attacker organizes the order and pattern for the victim and the attacker to use the cache and times the operations for probing.
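The four-tuple can be encoded, for example, as a small Python structure (the class name and the example instances are illustrative, not from the survey):

```python
from dataclasses import dataclass

# Categories from the model; the member sets mirror the definitions above.
CACHE_TYPE = {"L1-I", "L1-D", "LLC"}
CACHE_ACT = {"Prime+Probe", "Flush+Reload", "Evict+Reload",
             "Evict+Time", "Flush+Flush", "Invalidate+Transfer"}
RANGES = {"core", "package", "NUMA", "system"}

@dataclass(frozen=True)
class CacheSideChannelAttack:
    vulne: str      # vulnerable code, e.g. a CVE identifier
    type: str       # carrier cache
    pattern: str    # probing pattern
    range: str      # co-residency scope

    def __post_init__(self):
        assert self.type in CACHE_TYPE
        assert self.pattern in CACHE_ACT
        assert self.range in RANGES

# Hypothetical example: an LLC Flush+Reload attack on a leaky OpenSSL code path.
atk = CacheSideChannelAttack("CVE-2018-0737", "LLC", "Flush+Reload", "package")
assert atk.range == "package"
```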
The model is illustrated in Figure 4.
According to the co-resident configuration of the attacker program and the victim program, a side-channel attack has a threat range, that is, upper and lower bounds on the leaked confidential information.
range defines the scope of the sensitive information that is leaked, that is, the source and destination of the information leakage.
Table 2 lists some of the research studies about side-channel attacks indicating their development.
Through comparison, the differences among cache side-channel attacks are mainly presented in the following aspects:
- the vulnerabilities the attack takes
- the type of cache
- the pattern of probing the cache state
- the range in which the attacker and the victim co-reside
These are key elements that constitute the cache side-channel attack model.
4.1 Cache side-channel attack model.
(1) Program Vulnerability.
For a program subject to side-channel attacks, the vulnerability arises because the "connection" between sensitive information and cache state changes cannot be cut off.
- In CVE-2018-0737 [27], during Montgomery arithmetic setup and modular exponentiation, OpenSSL calls the BN_mod_inverse() and BN_mod_exp_mont() functions without setting the BN_FLG_CONSTTIME flag. The resulting code path is not constant-time and eventually leaks the critical GCD state and critical exponentiation state. As shown in Figure 5, the patched code adds the BN_FLG_CONSTTIME flag so that execution time is constant across different inputs. This cuts off the "connection" between sensitive information and the cache state, so a cache side-channel attack can no longer leak the corresponding information through the cache.
(2) Cache Type.
CACHE_TYPE defines the types of CPU cache used by the attacks. There are three main types of cache utilized in CPU cache-based side-channel attacks: L1-I, L1-D, and LLC; namely, CACHE_TYPE = {L1-I, L1-D, LLC}.
L1-I: The L1 cache is the fastest part of the cache hierarchy; its capacity and speed greatly affect CPU performance. It is therefore split into a data cache and an instruction cache. L1-I is the instruction cache, dedicated to storing instructions at the first level. Whether content (code) is in the cache indicates whether it was accessed recently.
- In side-channel attacks, attackers usually use the L1-I cache to infer execution paths. Studies [6, 21] used the L1-I cache to mount side-channel attacks on RSA and obtain sensitive information.
L1-D: the part of the first-level cache that stores data. Unlike L1-I, the state of the L1-D cache reflects accesses to data in memory. Although it is unlikely that the specific values of the data stored in these caches can be read, the access patterns of the victim program's data structures and variables can also be used to infer sensitive data.
- Studies [2, 5, 14, 23] used the L1-D cache as the carrier of side-channel attacks to break OpenSSL's RSA and AES implementations and successfully leak encryption/decryption keys.
LLC (last-level cache): the part of the CPU cache closest to memory. As shown in Table 1, the LLC has a larger access latency than the L1-I and L1-D caches; therefore, cache side-channel attacks that use the LLC as the carrier are more robust.
- In modern CPUs with three cache levels, the L3 cache is the LLC. Usually, the L3 cache is shared and synchronized across the whole package, while the L1 and L2 caches are private to a core. This difference leads to different attack ranges.
- A cache side-channel attack on the LLC can leak sensitive information to an attacker sharing the same package, while attacks based on the L1-D and L1-I caches can only leak sensitive information to an attacker sharing the same core. This also explains why more side-channel attacks tend to use the L3 cache [4, 8, 9, 12, 13, 17, 20, 22, 30–36, 38–41].
(3) Attack Pattern.
CACHE_ACT defines the communication mode between the attacker and the victim programs.
There are 6 attack patterns.
① Prime + Probe.
In the Prime+Probe pattern, the attacker infers the victim program's behavior by detecting which part of the attacker program's cache has been evicted by the victim.
Figure 6 shows the process of a single leak. It has three steps:
- First, in the priming phase, the attacker fills the cache with its own content (marked yellow in the figure).
- Next, it lets the victim program run for a period of time (the length of this period depends on the victim program's algorithm and the desired sensitive information). During this period, the victim program uses the cache according to its own code logic. Because of cache conflicts, the attacker program's data or code in the corresponding cache is evicted. These evicted cache lines are loaded with data from the victim, marked blue in the figure.
- Finally, in the probing phase, the attacker reads the data it previously placed in the cache again and records the access time.
- If the read time exceeds a predefined threshold, the data in that cache line has been evicted by the victim program, and the victim's corresponding data or code was accessed in the recent interval.
- Otherwise, the cache line was not evicted, i.e., the victim did not access it.
Finally, by priming and probing repeatedly, the complete sensitive information is leaked.
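The three phases can be sketched with a toy direct-mapped cache simulation (owner tags and latency constants are invented; real attacks measure cycle counts on set-associative caches):

```python
# Minimal Prime+Probe simulation on one direct-mapped toy cache of N lines.
N_LINES, HIT, MISS, THRESHOLD = 8, 10, 200, 100

class ToyCache:
    def __init__(self, n):
        self.lines = [None] * n            # owner tag per line
    def access(self, owner, line):
        # A hit is fast; a miss is slow and installs the new owner (eviction).
        t = HIT if self.lines[line] == owner else MISS
        self.lines[line] = owner
        return t

cache = ToyCache(N_LINES)

# 1) Prime: the attacker fills every line with its own data.
for i in range(N_LINES):
    cache.access("attacker", i)

# 2) The victim runs and touches a secret-dependent set of lines.
secret_lines = {2, 5}
for i in secret_lines:
    cache.access("victim", i)

# 3) Probe: slow re-accesses reveal which lines the victim used.
leaked = {i for i in range(N_LINES)
          if cache.access("attacker", i) > THRESHOLD}
assert leaked == secret_lines
```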
② Flush + Reload.
The Flush+Reload pattern is a variant of Prime+Probe.
The multi-level structure of the CPU cache makes it inclusive. In other words, content in a high-level cache (such as L1) must also exist in the low-level cache (the LLC), and when content is evicted from the last-level cache, the higher-level caches also evict the corresponding cache lines. Based on this fact, the attack uses the CLFLUSH instruction to empty the cache and launch the attack. The workflow of Flush+Reload is shown in Figure 7.
- The attacker first uses the CLFLUSH instruction to evict the victim program's content from all caches, ensuring that none of the victim's content remains in the cache.
- Next, it lets the victim program run for a period of time.
- When the time slice is exhausted, the attacker program reloads the content and, based on the reload time, checks whether the content is present in the cache.
- If the access time is short, the victim accessed the content in the previous phase.
Note: the Flush+Reload pattern requires shared memory, to ensure that the attacker and the victim operate on the same memory content and the same cache lines.
If the attacker finds that the previously evicted content appears in the cache, it was accessed by the victim program, which in turn reveals the victim program's running state and sensitive information.
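A simulated sketch of the three steps (the `clflush`/`timed_reload` stand-ins model the x86 CLFLUSH instruction and an RDTSC-timed load; all constants and the address are hypothetical):

```python
HIT, MISS, THRESHOLD = 10, 200, 100
cache = set()                      # shared-memory lines currently cached

def clflush(addr):                 # stand-in for the x86 CLFLUSH instruction
    cache.discard(addr)

def timed_reload(addr):            # stand-in for an RDTSC-timed memory load
    t = HIT if addr in cache else MISS
    cache.add(addr)                # the load itself caches the line
    return t

shared_fn = 0x7F00                 # hypothetical address inside a shared library

# 1) Flush the monitored line, 2) let the victim run, 3) reload and time.
clflush(shared_fn)
cache.add(shared_fn)               # victim calls the shared function -> cached
assert timed_reload(shared_fn) < THRESHOLD   # fast reload => victim accessed it

clflush(shared_fn)                 # next round: the victim stays idle
assert timed_reload(shared_fn) > THRESHOLD   # slow reload => no victim access
```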
③ Evict + Reload.
The Evict+Reload and Flush+Reload patterns are similar; the main difference is the eviction method:
- In Flush+Reload, the attacker and the victim program share memory, so the corresponding cache lines can be flushed directly with the CLFLUSH instruction.
- If the two have no shared memory, the attacker uses the Evict+Reload pattern.
The attack workflow of Evict+Reload is shown in Figure 8.
It loads memory that competes for the cache (the yellow blocks in the figure) to evict the cache lines. That is, the attacker finds memory (different from the current content) that maps to the same cache line. When the attacker program accesses that memory, the cached copy of the victim program's corresponding memory is also evicted from the cache line, establishing the "connection" between the two.
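Without shared memory, the attacker must build an eviction set: attacker-owned addresses that map to the same cache set as the target. A sketch under an assumed geometry (64-byte lines, 1024 sets, ignoring associativity and LLC slicing):

```python
LINE_BITS, SETS = 6, 1024       # assumed geometry: 64 B lines, 1024 sets

def cache_set(addr):
    """Set index of a physically indexed cache (direct-mapped for simplicity)."""
    return (addr >> LINE_BITS) & (SETS - 1)

target = 0x12345                # hypothetical victim address
stride = SETS << LINE_BITS      # addresses one cache-span apart collide
eviction_set = [(target & ~(2**LINE_BITS - 1)) + k * stride for k in range(4)]

# Every member maps to the victim's set, so accessing them evicts its line.
assert all(cache_set(a) == cache_set(target) for a in eviction_set)
```

On real hardware, finding congruent addresses is harder because the physical address and LLC slice hash are not directly visible; huge pages, as noted earlier, are one way attackers recover enough address bits.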
④ Evict + Time.
In Evict+Time, the attacker first lets the victim program run normally (using the cache as usual) and records the time to establish a baseline.
Next, the attacker evicts some cache lines and lets the victim program run again.
By comparing the victim program's time with the baseline, the attacker can determine whether the victim program used the evicted cache lines.
Note: Evict+Time requires multiple executions. If the victim program runs only once, accurate results cannot be obtained.
⑤ Flush + Flush.
The Flush+Flush pattern is based on the fact that the CLFLUSH instruction takes a different time to execute depending on the cache state.
- If the target of CLFLUSH is present in the cache, it must be evicted from multiple cache levels during execution, so the execution time is longer.
- Otherwise, the execution time is shorter.
This timing difference allows the attacker program to determine, from the instruction's execution time, whether a cache line has been accessed by the victim program.
The workflow of Flush+Flush is shown in Figure 9.
- After all cache lines are flushed, normal accesses load their content into the cache.
- Next, the attacker flushes the cache again with the CLFLUSH instruction and measures its execution time.
Note: unlike the other patterns, Flush+Flush relies on instruction execution time rather than memory accesses, so it is prone to false positives and false negatives.
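The timing asymmetry of the flush itself can be modeled as follows (constants are illustrative, not measured):

```python
FLUSH_CACHED, FLUSH_UNCACHED, THRESHOLD = 150, 60, 100
cache = set()

def timed_clflush(addr):
    """Stand-in for timing CLFLUSH itself: evicting a cached line costs more."""
    t = FLUSH_CACHED if addr in cache else FLUSH_UNCACHED
    cache.discard(addr)
    return t

target = 0x4000                         # hypothetical monitored address
cache.add(target)                       # victim accessed the target line
assert timed_clflush(target) > THRESHOLD   # slow flush  => line was cached
assert timed_clflush(target) < THRESHOLD   # fast reflush => already evicted
```

Because the attacker never performs a normal load, the attack triggers no cache hits of its own, which is what makes it stealthy despite the higher error rate.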
⑥ Invalidate + Transfer.
On platforms with multiple processors, it seems impossible for sensitive information to leak across processors, because the processes use independent CPU caches.
However, Irazoqui [16] found that, to guarantee cache coherence, cache data is synchronized across different processors. When the memory to be accessed is invalid in one CPU's cache, it can be synchronized from another CPU. This introduces a time difference.
In the Invalidate+Transfer pattern:
- The attacker program first evicts the shared memory from a CPU cache and then lets the victim program run.
- If the time measured when the memory is accessed again is lower than the threshold, the memory was re-accessed, and the previously evicted cache content was synchronized back into the cache across processors.
In this way, the attacker can infer whether the target memory is accessed across different CPUs and then leak sensitive information.
(4) Range.
The basis of cache side-channel attacks is the cache, and different types of cache have their own scope of influence.
- The L1 cache is shared per core; that is, only programs executed on the same physical core can access the same L1 cache at the same time, so the scope of the attack is also limited to the core.
We summarize the current attacks into four levels:
- System. System sharing means resources that all processes in the system can access.
- NUMA. NUMA sharing is smaller than system sharing; it is based on the same memory controller.
- Package. The package range is shared among objects within the same package.
- Core. Core sharing has the smallest range.
When the range is defined, attacks can only leak sensitive information from victims in the same scope (same core or package).
4.2 Trend of the Attacks.
- More general cache types: from specialized caches to general caches; from high levels to low levels (L1 ⟶ LLC)
- Wider range of the attack threats
- Looser attack requirements
- More stable channels
4.3 Attack Conditions.
- T1. There is a corresponding mapping between the state change of the cache and the sensitive information in the program.
- T2. In the range where the corresponding cache is shared, other programs are allowed to co-reside.
- T3. The co-resident program can infer the target cache status change through its own cache status.
5. Analysis of the Defenses.
By studying why each defense works, the defenses can be classified into different strategy types.
5.1 Information Independency.
T1 requires a correlation between sensitive information and the cache state during operation. Information independency (e.g., constant time) breaks this condition. It guarantees that the target program's behavior is completely independent of the sensitive data, i.e., cache accesses, branch choices, and so on are unrelated to the sensitive information. In this case, even if the order of cache use or even the program's execution is exposed, the sensitive information remains safe. The patches for the OpenSSL vulnerabilities CVE-2018-0737 [27], CVE-2018-0734 [43], CVE-2018-12438 [44], CVE-2014-0076 [45], and CVE-2016-2178 [29] modified the code of different branches so that the program's execution order is independent of the key, preventing the corresponding cache side-channel attacks.
- Brickell [46] suggested not performing sensitive-information-dependent access operations on memory beyond cache-line granularity, to break the "connection" between sensitive information and memory access patterns.
Q: What does memory within cache-line granularity mean?
- The NaCl library [47] and libfixedtimefixpoint [48] turn the execution times of different branches in OpenSSL into input-independent constants, which can resist side-channel attacks.
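The constant-time idea can be illustrated in Python: the first comparison leaks through its data-dependent early exit, while `hmac.compare_digest` from the standard library performs work that does not depend on where the inputs differ:

```python
import hmac

# Data-dependent early exit: execution time reveals the first mismatching byte.
def leaky_equals(a: bytes, b: bytes) -> bool:
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:          # branch on secret data -> timing/cache channel
            return False
    return True

# Information-independent version: same work for every equal-length input.
def constant_time_equals(a: bytes, b: bytes) -> bool:
    return hmac.compare_digest(a, b)

assert constant_time_equals(b"secret", b"secret")
assert not constant_time_equals(b"secret", b"secreX")
```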
5.2 Time Blinding.
Modifying the time (CPU cycles) that the attacker reads affects its judgment of the cache state.
Time blinding strategies ruin T3 to make their defenses effective.
- Virtual time:
Virtual time hides the real access time. When the attacker requests a CPU cycle reading, the system returns a constructed time. Vattikonda et al. [49] designed a Xen-based cloud computing environment that provides virtual time. In such a cloud environment, when the attacker requests CPU cycles, it receives a virtual clock value given by the system instead of the actual time, so changes in the victim program's cache state cannot be inferred.
- Time black box:
Cock et al. and Zhang et al. [50, 51] use a time black box to mitigate side-channel attacks. Unlike virtual time, which modifies every interface used to read time, the black box treats the whole system as a unit and controls the timing of events that can be measured externally. In other words, the program's execution is a black box, and the execution time of its individual parts cannot be obtained. The time an attacker obtains from outside is the time of the whole black box, so it cannot infer the execution state inside the box, let alone the connection between sensitive information and state changes.
5.3 Time Sharing.
time-sharing, in data processing, method of operation in which multiple users with different programs interact nearly simultaneously with the central processing unit (CPU) of a large-scale digital computer.
Time sharing strategy realizes the defense by ruining T3.
Through the time-sharing use of the cache, the threat of cache side-channel attacks can be reduced.
- Godfrey et al. [52] proposed clearing all cache contents when switching between virtual machines, but this imposes a huge performance overhead on the system.
- In the experiments of Benjamin et al. [53], clearing the contents of the L1 cache causes a 17% performance loss. If the LLC were flushed on every VM switch, the performance overhead would be even greater.
5.4 Resource Isolation.
Different from the time-sharing strategy in which shared resources are provided to the victim and attacker programs, the resource division strategy divides the resources for use by different programs.
That is, different resources are no longer shared by multiple programs. They are now exclusive. The basis of the resource isolation strategy is to reduce the shared area.
For example, turning off the hyperthreading in hardware reduces the resource sharing caused by symmetric multithreading (SMT) and prevents different threads from accessing the cache.
In fact, most cloud service providers, such as Microsoft’s Azure [54], turned off SMT to gain higher security guarantees. VMWare [53] suggests turning off the page sharing feature in the configuration to resist cross-virtual machine side-channel attacks.
Resource isolation strategies are applied to CPU caches as well, which are mainly divided into hardware isolation and software isolation.
Hardware isolation aims to use a hardware mechanism to ensure that the cache line is exclusively occupied by the victim program, so the attacker program cannot use its own cache performance to infer the changes in the victim program.
- It ruins T3 to counter side- channel attacks.
- Cache Allocation Technology (CAT) provided by Intel can prevent specific cache lines in the LLC from being replaced. CATalyst [55] manages the use of the cache through CAT. Its experiments prove that it is an effective defense against LLC-based side-channel attacks in cloud computing environments.
- Partition-locked cache (PLcache)[56] adds a locking attribute to the use of the cache, which isolates the use of cache lines. It destroys T3 by reducing the conflict between the victim and the attacker on the cache line.
The software isolation of resources mainly uses the method of cache coloring.
The cache coloring algorithm was originally proposed to improve the cache replacement efficiency, reducing conflicts and increasing the hit rate for higher system performance.
Cache coloring divides the cache into different color blocks and indexes the color blocks that should be stored according to the coloring bits. When the cache group is mapped, the high part of the index is used as the coloring bits to determine the corresponding cache line. If the physical address (PA) is used as the index of the cache, the coloring bits are not only the high part of the set index but also the low part of the frame number during memory addressing. This ensures that if the memory belongs to different pages, it will be allocated to different color blocks.
Q: cache coloring???
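As a concrete illustration, the coloring bits are the overlap between the cache set index and the physical frame number. A sketch under an assumed geometry (64-byte lines, 4 KiB pages, a 2 MiB 16-way cache, i.e., 2048 sets; all numbers are examples, not from the survey):

```python
LINE_BITS = 6          # 64 B line   -> offset bits [0:6)
SETS = 2048            # set index   -> bits [6:17)
PAGE_BITS = 12         # 4 KiB page  -> frame number starts at bit 12

SET_INDEX_BITS = SETS.bit_length() - 1               # 11 set-index bits
COLOR_BITS = LINE_BITS + SET_INDEX_BITS - PAGE_BITS  # bits shared by set index
N_COLORS = 1 << COLOR_BITS                           # and frame number: 2^5 = 32

def color(pa):
    """Color = overlap of the cache set index and the physical frame number."""
    return (pa >> PAGE_BITS) & (N_COLORS - 1)

# Pages with different colors can never conflict in the same cache set, so the
# allocator can give attacker and victim disjoint colors.
assert N_COLORS == 32
assert color(0x0000_0000) != color(0x0000_1000)  # adjacent frames differ
assert color(0x0000_0000) == color(0x0002_0000)  # colors repeat every 32 pages
```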
5.5 Anti-Co-Resident Detection.
[15, 58] proposed a method that uses a covert channel to exchange co-residence information across VMMs. However, it requires both the attacker and the victim to be under full control, so it can only serve as a proof of concept, meaning it would not be exploited as a practical threat.
Attackers can perform co-residence detection by occupying hardware resources. When the attacker and the victim program co-reside, resource preemption and scheduling inevitably occur, giving the attacker the ability to search the platform for attackable targets.
Network-based co-residence detection
A. In the work of Adam et al. [59], the network is used to determine whether two virtual machines are on the same physical host.
- The attacker occupies the physical NIC by continuously sending network packets, which increases the network communication latency between it and the target virtual machine.
- From this it infers whether the target is on the same physical host, thus completing co-residence detection.
B. Attackers can also find co-resident targets (i.e., attackable objects) through network information.
Host platforms usually assign the same or similar IP addresses to virtual machines. Based on the IP information, Ristenpart et al. [15] successfully performed co-residence detection on Amazon Elastic Compute Cloud servers. **Co-residence detection based on network information can be fixed through software updates.** Current Amazon EC2 servers have reconfigured their networks, so network information can no longer be used for co-residence detection.
The most common approach is to use network information to detect other virtual machines residing on the same physical machine. It is simple but effective.
Co-residence detection is an important step in cross-VM side-channel attacks. By optimizing VM isolation and hardware resource management in cloud services, attackers can be prevented from inferring co-residence from changes in hardware performance. In addition, by modifying the network configuration, co-residence information can be hidden in cloud services.
This destroys T2 and makes cross-VM side-channel attacks impossible to proceed normally.
Co-residence detection by monitoring cache load
Zhang et al. [60] take the opposite approach, inferring whether other virtual machines exist in the environment from cache load statistics. Si Yu et al. [61] later further abstracted the modeling of this method and improved its accuracy.
==Anti-co-resident detection is mainly used for defense on cloud servers.== In local side-channel attacks, the attacker already has strong prior knowledge of the victim's program and has confirmed that it is executed. The attacker may possess the source code or the binary, and may even be able to execute the victim's program many times. In this case, the anti-co-resident detection strategy may fail.
5.6 Channel Interference.
A stable, usable channel is the practical foundation of a side-channel attack.
Injecting noise into the cache interferes with the use of the channel. Hu [62] designed fuzzy time, which injects noise into various events.
Similarly, Brickell et al. [63] compressed, reloaded, and randomized the lookup tables used during AES encryption and decryption to defend against cache side-channel attacks.
For general encryption/decryption algorithms, the cache changes caused by these operations are all noise. The injected noise can interfere with the side channel so that the attacker program cannot use its cache to guess how the victim program uses the cache, thereby ruining T3.
There are many techniques for interfering with the channel:
- RPcache [56] (random permutation cache) introduces extra entropy through randomized cache indexing, preventing the attacker's program from preempting and guessing the cache.
- Vattikonda [49] uses the hypervisor to insert noise into the timing measurements of events.
- Zhang et al. [64] set up a bystander virtual machine to inject noise into the L2 cache of the whole VM platform. Experiments show that the noise introduced by the bystander VM can effectively interfere with the channel.
However, channel interference is not a perfect defense strategy. Under high security requirements, breaking channel interference is just a game of complexity and time.
To strengthen the channel, attackers have proposed different countermeasures, such as using error-correcting codes or the SSH protocol to enhance the robustness of the channel [65].
These enhancements can bypass channel-interference defenses such as noise injection. As research on side-channel attacks deepens, more attack forms and variants have appeared. However, all newly proposed attack methods still fall within the model described above; therefore, the corresponding defenses are still inseparable from T1, T2, and T3.
6. Challenges and Trends.
6.1 Challenges.
a. Different from common vulnerabilities, a side channel links sensitive information and state changes, which is unexpected for developers and results in the compromise of privacy. Side-channel attacks use the CPU cache as the carrier, which receives little attention during development; yet because of the importance of the CPU cache in modern computer architecture, the performance gains it brings cannot be given up, so CPU cache-based side-channel attacks are difficult to eliminate.
b. Considering efficiency and performance, environments (especially cloud computing service platforms) are unwilling to completely isolate hardware resources.
The future of cloud computing is sharing and reuse; deploying large amounts of dedicated resources is impossible.
c. Artificial intelligence helps to recognize the patterns of cache activities.
AI algorithms reduce the difficulty of connecting sensitive information to cache states, making the channel more usable.
6.2 Trends.
(1) Strengthen security awareness of programmers, and create more effective code review policies in software engineering.
Q: Can code review really solve this?
(2) The diversification of obfuscation and identification of the cache.
- Oblivious Random Access Machine(ORAM)
ORAM reference: An Introduction to Oblivious RAM (ORAM) – Kudelski Security Research
7. Conclusion.
- Focus on side-channel attacks that are based on CPU caches.
- By comparing different types of attacks, summarize the general workflow of the attack.
- Propose a model that distinguishes attacks from four perspectives:
  - vulnerability
  - cache type
  - pattern
  - range
- Defense strategies: classified into 6 types, according to how they take effect in the actual world.
- The attack conditions indicate the basis of the defense strategies.
Interesting directions to follow will be invalidating these conditions and proposing more effective defenses.