Pretexting

From HandWiki

Pretexting is a type of social engineering attack that involves a situation, or pretext, created by an attacker in order to lure a victim into a vulnerable situation and to trick them into giving private information, specifically information that the victim would typically not give outside the context of the pretext.[1] In its history, pretexting has been described as the first stage of social engineering, and has been used by the FBI to aid in investigations.[2] A specific example of pretexting is reverse social engineering, in which the attacker tricks the victim into contacting the attacker first.

A reason for pretexting's prevalence among social engineering attacks is that it relies on manipulating the human mind to gain access to the information the attacker wants, rather than on hacking a technological system. When looking for victims, attackers can watch for a variety of characteristics, such as ability to trust, low perception of threat, response to authority, and susceptibility to react with fear or excitement in different situations.[3][4] Throughout history, pretexting attacks have increased in complexity, evolving from the manipulation of telephone operators in the 20th century to the Hewlett-Packard scandal in the 2000s, which involved the use of social security numbers, phones, and banks.[5] Current education frameworks on social engineering are used in organizations, although researchers in academia have suggested possible improvements to those frameworks.[6]

Background

Social engineering

Social engineering is a psychological manipulation tactic that leads to the unwilling or unknowing response of the target/victim.[7] It is one of the top information security threats in the modern world, affecting organizations, business management, and industries.[7] Social engineering attacks are considered difficult to prevent because they are rooted in psychological manipulation.[8] These attacks can also reach a broader scale: in other security attacks, a company that holds customer data might be breached, but in social engineering attacks, both the company (specifically workers within the company) and the customers themselves are susceptible to being targeted directly.[8]

An example is the banking industry, where not only bank employees but customers as well can be attacked. Social engineering culprits directly target customers and/or employees in order to work around hacking a purely technological system, exploiting human vulnerabilities instead.[8]

Though its definition in relation to cybersecurity varies across the literature, a common theme is that social engineering (in cybersecurity) exploits human vulnerabilities in order to breach entities such as computers and information technology.[2]

Little literature and research currently exists on social engineering. However, a main part of the methodology when researching it is to set up a made-up pretext. When assessing which social engineering attacks are the most dangerous or harmful (e.g., phishing, vishing, water-holing), the type of pretext is a largely insignificant factor, since some attacks can have multiple pretexts. Thus, pretexting itself is widely used, not just as an attack in its own right, but as a component of others.[9]

Pretexting in the timeline of social engineering

In cybersecurity, pretexting can be considered one of the earliest stages of evolution for social engineering. For example, while the social engineering attack known as phishing relies on modern items such as credit cards and mainly occurs in the electronic space, pretexting was and can be implemented without technology.[10]

Pretexting was one of the first examples of social engineering. The term was coined by the FBI in 1974, and the concept was often used to help in its investigations. In this phase, pretexting consisted of an attacker simply calling the victim and asking for information.[2] Pretexting attacks usually rely on persuasion tactics. After this beginning phase of social engineering's evolution (1974–1983), pretexting expanded to include deception tactics as well as persuasion. As technology developed, pretexting methods developed alongside it, and with the invention of social media, hackers soon had access to a wider audience of victims.[2]

Reverse social engineering

Reverse social engineering is a more specific example of pretexting.[11] It is a non-electronic form of social engineering in which the attacker creates a pretext that manipulates the victim into contacting the attacker first, rather than the other way around.

Typically, reverse social engineering attacks involve the attacker advertising their services as a type of technical aid, establishing credibility. The victim is then tricked into contacting the attacker after seeing the advertisements, without the attacker ever directly contacting the victim. Once an attacker successfully accomplishes a reverse social engineering attack, a wide range of further social engineering attacks becomes possible because of the falsified trust between the attacker and the victim. For example, the attacker can give the victim a harmful link and claim that it is a solution to the victim's problem; because of the established connection, the victim will be inclined to believe the attacker and click on the harmful link.[12]
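The sequence described above has a fixed shape: the attacker never initiates contact, yet ends up delivering the payload. A minimal sketch of that interaction order is below; the classes and messages are hypothetical illustrations of the flow, not part of any cited attack.

```python
# Sketch of the reverse social engineering sequence described above.
# All names and messages here are hypothetical illustrations.

from dataclasses import dataclass, field


@dataclass
class Interaction:
    initiator: str  # which party produced this step
    message: str


@dataclass
class ReverseSEScenario:
    log: list = field(default_factory=list)

    def attacker_advertises(self):
        # Step 1: the attacker passively advertises a "service";
        # no direct contact with any victim yet.
        self.log.append(Interaction("attacker", "ad: free PC repair hotline"))

    def victim_contacts(self):
        # Step 2: the victim initiates contact, which falsely
        # establishes trust and credibility.
        self.log.append(Interaction("victim", "call: my computer is broken"))

    def attacker_delivers_payload(self):
        # Step 3: the falsified trust is exploited, e.g. a harmful
        # link framed as the "solution" to the victim's problem.
        self.log.append(Interaction("attacker", "link: download-this-fix"))


scenario = ReverseSEScenario()
scenario.attacker_advertises()
scenario.victim_contacts()
scenario.attacker_delivers_payload()

# The defining property of the reverse variant: the victim, not the
# attacker, makes the first direct contact.
first_direct_contact = scenario.log[1].initiator
print(first_direct_contact)  # victim
```

The point of the sketch is only the ordering: an advertisement is broadcast rather than sent, so the first *direct* interaction is victim-initiated, which is what distinguishes reverse social engineering from ordinary pretexting.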

Social aspect

Pretexting was and continues to be seen as a useful tactic in social engineering attacks. According to researchers, this is because such attacks do not rely on technology (such as hacking into computer systems or breaching technology). Pretexting can occur online, but it is more reliant on the user and on the aspects of their personality that the attacker can utilize to their advantage.[13] Attacks that are more reliant on the user are harder to track and control, as each person responds to social engineering and pretexting attacks differently. An attack aimed directly at a computer, however, can take less effort to solve, since computers work in relatively similar ways.[13] There are certain characteristics of users that attackers pinpoint and target. In academia, some common characteristics[14] are:

Prized

If the victim is "prized", it means that they have some type of information that the social engineer desires.[3]

Ability to trust

Trustworthiness goes along with likability, as typically the more someone is liked, the more they are trusted.[14] Similarly, when trust is established between the social engineer (the attacker) and the victim, credibility is also established. Thus, it is easier for the victim to divulge personal information to the attacker if the victim is more easily able to trust.[4]

Susceptibility to react

How easily a person reacts to events, and to what degree, can be used in a social engineer's favor. In particular, emotions like excitement and fear are often used to persuade people to divulge information. For example, a pretext could be established wherein the social engineer teases an exciting prize for the victim if they agree to give the social engineer their banking information. The feeling of excitement can be used to lure the victim into the pretext and persuade them to give the attacker the information being sought.[14]

Low perception of threat

Despite understanding that threats exist when doing anything online, most people will perform actions contrary to that understanding, such as clicking on random links or accepting unknown friend requests.[14] This is because the person perceives the action as carrying a low threat or negative consequence. This lack of fear of threat, despite an awareness of its presence, is another reason why social engineering attacks, especially pretexting, are prevalent.[15]

Response to authority

If the victim is submissive and compliant, then an attacker is more likely to succeed if the pretext is set up so that the victim believes the attacker is posing as some type of authoritative figure.[14]
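The characteristics above can be read as a rough taxonomy of what makes a "model victim". As a loose illustration only, they could be combined into a simple profile-scoring sketch; the trait names, weights, and threshold here are assumptions for the example, not an established assessment instrument from the cited research.

```python
# Hedged sketch: scoring a hypothetical victim profile on the
# characteristics listed above. Traits, equal weighting, and the
# sample values are illustrative assumptions only.

from dataclasses import dataclass


@dataclass
class VictimProfile:
    prized: bool                 # holds information the attacker wants
    trust_propensity: float      # 0.0-1.0: ability to trust
    reactivity: float            # 0.0-1.0: susceptibility to fear/excitement
    threat_perception: float     # 0.0-1.0: perceived risk of online actions
    authority_compliance: float  # 0.0-1.0: deference to authority figures


def susceptibility_score(p: VictimProfile) -> float:
    """Combine the traits into one score in [0, 1].

    A target who is not "prized" holds nothing the attacker wants,
    so the score is zero regardless of the other traits.
    """
    if not p.prized:
        return 0.0
    # Low perception of threat *raises* risk, so it enters inverted.
    return (p.trust_propensity
            + p.reactivity
            + (1.0 - p.threat_perception)
            + p.authority_compliance) / 4.0


profile = VictimProfile(prized=True, trust_propensity=0.8, reactivity=0.7,
                        threat_perception=0.2, authority_compliance=0.9)
print(round(susceptibility_score(profile), 2))  # 0.8
```

The one structural choice worth noting is that "prized" acts as a gate rather than a weighted term: per the definition above, it determines whether the victim is worth targeting at all, while the remaining traits only describe how easy the attack would be.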

Examples

Early pretexting (1970–80s)

The October 1984 article "Switching centres and Operators" detailed a common pretexting attack at the time. Attackers would often contact operators who specifically served deaf people using teletypewriters. The logic was that these operators were often more patient than regular operators, so it was easier to manipulate and persuade them into giving up the information the attacker desired.[2]

Recent examples

A notable example is the Hewlett-Packard scandal. Hewlett-Packard wanted to know who was leaking information to journalists. To find out, the company provided private investigators with employees' personal information (such as social security numbers), and the private investigators in turn called phone companies impersonating those employees in hopes of obtaining call records. When the scandal was discovered, the CEO resigned.[16]

Socialbots are machine-operated fake social media profiles employed by social engineering attackers. On social media sites like Facebook, socialbots can be used to send mass friend requests in order to find as many potential victims as possible.[5] Using reverse social engineering techniques, attackers can use socialbots to gain massive amounts of private information on many social media users.[17] In 2018, a fraudster impersonated the entrepreneur Elon Musk on Twitter by altering their account name and profile picture, then ran a deceptive giveaway scam promising to multiply any cryptocurrency sent in, and kept the funds that victims sent. The incident is an example of pretexting employed as a tactic within a social engineering attack.[18]
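The mass friend-request behaviour described above leaves a measurable trace: a socialbot tends to issue requests far faster than a human would. As a loose illustration (an assumed rate-based heuristic, not the profile-monitoring method of the cited work), one could flag accounts whose request rate exceeds a plausible human ceiling:

```python
# Illustrative heuristic, assumed for this example: flag an account if
# any sliding time window contains more friend requests than a human
# plausibly sends. Threshold and window are arbitrary choices.

from datetime import datetime, timedelta


def looks_like_mass_requester(request_times,
                              window=timedelta(hours=1),
                              threshold=50):
    """Return True if any `window`-wide span holds > `threshold` requests."""
    times = sorted(request_times)
    start = 0
    for end, t in enumerate(times):
        # Slide the window start forward until it fits within `window`.
        while t - times[start] > window:
            start += 1
        if end - start + 1 > threshold:
            return True
    return False


base = datetime(2018, 1, 1)
# Bot-like burst: 60 requests within one minute.
burst = [base + timedelta(seconds=i) for i in range(60)]
# Human-like pattern: one request per day.
slow = [base + timedelta(days=i) for i in range(60)]

print(looks_like_mass_requester(burst))  # True
print(looks_like_mass_requester(slow))   # False
```

Real socialbot detectors combine many such signals (profile features, network structure, message content); a single rate check like this is easily evaded by throttling, which is why it is presented here only as a sketch of the underlying idea.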

Current education frameworks

Current education frameworks on the topic of social engineering fall into two categories: awareness and training. Awareness means presenting information about social engineering to the intended party to inform them about the topic. Training means specifically teaching the skills that people will need if they encounter, or may encounter, a social engineering attack.[6] Awareness and training can be combined into one intensive process when constructing education frameworks.

While research has been done on the success and necessity of training programs in the context of cybersecurity education,[19] up to 70% of the information can be lost when it comes to social engineering training.[20] In a research study on social engineering education in banks across the Asia-Pacific region, it was found that most frameworks touched upon only awareness or only training, and that phishing was the only type of social engineering attack taught. Comparing the security policies on these banks' websites shows that the policies contain generalized language such as "malware" and "scams" while omitting the details of the different types of social engineering attacks and examples of each type.[6]

This generalization does not benefit the users being educated by these frameworks, as considerable depth is missing when a user is educated only on broad terms like the examples above. Moreover, purely technical methods of combating social engineering and pretexting attacks, such as firewalls and antiviruses, are ineffective. This is because social engineering attacks typically exploit the social characteristics of human nature, so purely technological countermeasures are ineffective against them.[21]

References

  1. Greitzer, F. L.; Strozer, J. R.; Cohen, S.; Moore, A. P.; Mundie, D.; Cowley, J. (May 2014). "Analysis of Unintentional Insider Threats Deriving from Social Engineering Exploits". 2014 IEEE Security and Privacy Workshops. pp. 236–250. doi:10.1109/SPW.2014.39. ISBN 978-1-4799-5103-1. https://ieeexplore.ieee.org/document/6957309. 
  2. 2.0 2.1 2.2 2.3 2.4 Wang, Zuoguang; Sun, Limin; Zhu, Hongsong (2020). "Defining Social Engineering in Cybersecurity". IEEE Access 8: 85094–85115. doi:10.1109/ACCESS.2020.2992807. ISSN 2169-3536. 
  3. 3.0 3.1 Steinmetz, Kevin F. (2020-09-07). "The Identification of a Model Victim for Social Engineering: A Qualitative Analysis" (in en). Victims & Offenders 16 (4): 540–564. doi:10.1080/15564886.2020.1818658. ISSN 1556-4886. https://www.tandfonline.com/doi/full/10.1080/15564886.2020.1818658. 
  4. 4.0 4.1 Algarni, Abdullah (June 2019). "What Message Characteristics Make Social Engineering Successful on Facebook: The Role of Central Route, Peripheral Route, and Perceived Risk" (in en). Information 10 (6): 211. doi:10.3390/info10060211. 
  5. 5.0 5.1 Paradise, Abigail; Shabtai, Asaf; Puzis, Rami (2019-09-01). "Detecting Organization-Targeted Socialbots by Monitoring Social Network Profiles" (in en). Networks and Spatial Economics 19 (3): 731–761. doi:10.1007/s11067-018-9406-1. ISSN 1572-9427. https://doi.org/10.1007/s11067-018-9406-1. 
  6. 6.0 6.1 6.2 Ivaturi, Koteswara; Janczewski, Lech (2013-10-01). "Social Engineering Preparedness of Online Banks: An Asia-Pacific Perspective". Journal of Global Information Technology Management 16 (4): 21–46. doi:10.1080/1097198X.2013.10845647. ISSN 1097-198X. https://doi.org/10.1080/1097198X.2013.10845647. 
  7. 7.0 7.1 Ghafir, Ibrahim; Saleem, Jibran; Hammoudeh, Mohammad; Faour, Hanan; Prenosil, Vaclav; Jaf, Sardar; Jabbar, Sohail; Baker, Thar (October 2018). "Security threats to critical infrastructure: the human factor" (in en). The Journal of Supercomputing 74 (10): 4986–5002. doi:10.1007/s11227-018-2337-2. ISSN 0920-8542. 
  8. 8.0 8.1 8.2 Airehrour, David; Nair, Nisha Vasudevan; Madanian, Samaneh (2018-05-03). "Social Engineering Attacks and Countermeasures in the New Zealand Banking System: Advancing a User-Reflective Mitigation Model" (in en). Information 9 (5): 110. doi:10.3390/info9050110. ISSN 2078-2489. 
  9. Bleiman, Rachel (2020). An Examination in Social Engineering: The Susceptibility of Disclosing Private Security Information in College Students (Thesis). doi:10.34944/dspace/365.
  10. Chin, Tommy; Xiong, Kaiqi; Hu, Chengbin (2018). "Phishlimiter: A Phishing Detection and Mitigation Approach Using Software-Defined Networking". IEEE Access 6: 42516–42531. doi:10.1109/ACCESS.2018.2837889. ISSN 2169-3536. 
  11. Greitzer, Frank L.; Strozer, Jeremy R.; Cohen, Sholom; Moore, Andrew P.; Mundie, David; Cowley, Jennifer (May 2014). "Analysis of Unintentional Insider Threats Deriving from Social Engineering Exploits". 2014 IEEE Security and Privacy Workshops. San Jose, CA: IEEE. pp. 236–250. doi:10.1109/SPW.2014.39. ISBN 978-1-4799-5103-1. https://ieeexplore.ieee.org/document/6957309. 
  12. Irani, Danesh; Balduzzi, Marco; Balzarotti, Davide; Kirda, Engin; Pu, Calton (2011). Holz, Thorsten; Bos, Herbert. eds. "Reverse Social Engineering Attacks in Online Social Networks" (in en). Detection of Intrusions and Malware, and Vulnerability Assessment. Lecture Notes in Computer Science (Berlin, Heidelberg: Springer) 6739: 55–74. doi:10.1007/978-3-642-22424-9_4. ISBN 978-3-642-22424-9. https://link.springer.com/chapter/10.1007%2F978-3-642-22424-9_4. 
  13. 13.0 13.1 Heartfield, Ryan; Loukas, George (2018), Conti, Mauro; Somani, Gaurav; Poovendran, Radha, eds., "Protection Against Semantic Social Engineering Attacks", Versatile Cybersecurity (Cham: Springer International Publishing) 72: pp. 99–140, doi:10.1007/978-3-319-97643-3_4, ISBN 978-3-319-97642-6, http://link.springer.com/10.1007/978-3-319-97643-3_4, retrieved 2020-10-29 
  14. 14.0 14.1 14.2 14.3 14.4 Workman, Michael (2007-12-13). "Gaining Access with Social Engineering: An Empirical Study of the Threat" (in en). Information Systems Security 16 (6): 315–331. doi:10.1080/10658980701788165. ISSN 1065-898X. http://www.tandfonline.com/doi/abs/10.1080/10658980701788165. 
  15. Krombholz, Katharina; Merkl, Dieter; Weippl, Edgar (December 2012). "Fake identities in social media: A case study on the sustainability of the Facebook business model" (in en). Journal of Service Science Research 4 (2): 175–212. doi:10.1007/s12927-012-0008-z. ISSN 2093-0720. http://link.springer.com/10.1007/s12927-012-0008-z. 
  16. Workman, Michael (2008). "Wisecrackers: A theory-grounded investigation of phishing and pretext social engineering threats to information security". Journal of the American Society for Information Science and Technology 59 (4): 662–674. doi:10.1002/asi.20779. ISSN 1532-2882. https://onlinelibrary.wiley.com/doi/full/10.1002/asi.20779. 
  17. Boshmaf, Yazan; Muslukhov, Ildar; Beznosov, Konstantin; Ripeanu, Matei (2013-02-04). "Design and analysis of a social botnet" (in en). Computer Networks. Botnet Activity: Analysis, Detection and Shutdown 57 (2): 556–578. doi:10.1016/j.comnet.2012.06.006. ISSN 1389-1286. http://www.sciencedirect.com/science/article/pii/S1389128612002150. 
  18. Bhusal, Chandra Sekhar (2020). "Systematic Review on Social Engineering: Hacking by Manipulating Humans". SSRN Electronic Journal. doi:10.2139/ssrn.3720955. ISSN 1556-5068. http://dx.doi.org/10.2139/ssrn.3720955. 
  19. McCrohan, Kevin F.; Engel, Kathryn; Harvey, James W. (2010-06-14). "Influence of Awareness and Training on Cyber Security". Journal of Internet Commerce 9 (1): 23–41. doi:10.1080/15332861.2010.487415. ISSN 1533-2861. https://doi.org/10.1080/15332861.2010.487415. 
  20. Ghafir, Ibrahim; Saleem, Jibran; Hammoudeh, Mohammad; Faour, Hanan; Prenosil, Vaclav; Jaf, Sardar; Jabbar, Sohail; Baker, Thar (2018-10-01). "Security threats to critical infrastructure: the human factor" (in en). The Journal of Supercomputing 74 (10): 4986–5002. doi:10.1007/s11227-018-2337-2. ISSN 1573-0484. 
  21. Heartfield, Ryan; Loukas, George; Gan, Diane (2016). "You Are Probably Not the Weakest Link: Towards Practical Prediction of Susceptibility to Semantic Social Engineering Attacks". IEEE Access 4: 6910–6928. doi:10.1109/ACCESS.2016.2616285. ISSN 2169-3536.