Artificial intelligence (AI) has been a hot topic in recent years. From China's payment systems to America's internet tracking, companies and governments are investing millions of dollars in growing the industry. Although there have already been many technological developments, there is much left to discover, including in sub-fields like human-like AI, virtual beings, and virtual human assistants.
In the search for ways to improve people's quality of life, innovators are constantly creating products that increase convenience. While this can be helpful, it also poses a number of threats.
Increased use of online therapy
Everything is done digitally nowadays. Even medical procedures like making bookings and therapy have online platforms. Such health sites can help to provide important information and allow patients to get necessary attention when needed.
However, anything done online carries a risk of miscommunication. Conversations are about more than what you say: expression, tone, and body language all matter. When you have a conversation online, none of that is visible.
You only type what you want to say, and the therapist will have to trust you. If you miss an important detail or just say what you want to say, the therapist might end up misdiagnosing you or giving you an unsuitable solution.
If you do enjoy typing more than face-to-face conversation, you might be able to pour your heart out through the keyboard. But there is still a chance of miscommunication on your end: even once the therapist understands you and gives you advice, you might be the one who misinterprets it.
Additionally, there is also a risk of private information going public – now you have to worry about your health and wealth.
Weaker personal information security
The use of technology is everywhere, and it is causing everything to go digital. Press companies are going online, and most of your personal information is stored online. Confidential content is usually stored on computers or local servers, but when your device's storage is full, you turn to cloud storage services.
Though cloud services store large amounts of content online for free, they also introduce security risks. Anyone's information is accessible to hackers who manage to bypass security barriers and firewalls, as the repeated leaking of celebrities' private lives online has shown.
To find out what consumers want, almost every company has some way to study their customers’ likes, dislikes, lifestyle, and needs. This means gathering customer information including details like full names, home addresses, and identification numbers. Whether this is done using AI, other forms of technology, or in-person, the companies have to store your information somewhere. Breaches in security can happen in big companies too, as seen in Facebook’s massive data leak.
The workforce might not like AI due to decreased job availability, but companies love it for its efficiency. These bots are always working. But in turn, this means constant supervision: not only must a human be on hand to solve new problems, but online users are also always being watched by AI.
Risk of compromised cyber safety
The surging use of social media makes algorithms a familiar term. To study user patterns, many websites and companies use trackers like web bots and cookies. Websites can track your internet patterns to find out ways to make your experience more convenient as well as profitable for them. This is why you find that the advertisements you see usually mirror your site history.
Browser tracking is prevalent when using search engines like Google. As soon as you type in a word or phrase, various suggestions pop up based on your personal and the general public’s search habits.
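As a rough illustration of how this kind of tracking works, the sketch below records page visits against a cookie identifier and then picks the ad category matching the visitor's history. All names and data here are hypothetical, not any real site's code:

```python
# Hypothetical sketch: how a cookie ID can link page visits to ad choices.
from collections import Counter

# Pages a visitor viewed, keyed by the cookie ID stored in their browser.
visits = {
    "cookie-abc123": ["shoes", "shoes", "laptops", "shoes"],
}

def pick_ad_category(cookie_id: str) -> str:
    """Return the category the visitor looked at most often."""
    history = visits.get(cookie_id, [])
    if not history:
        return "generic"
    return Counter(history).most_common(1)[0][0]

print(pick_ad_category("cookie-abc123"))   # most-viewed category: "shoes"
print(pick_ad_category("cookie-unknown"))  # no history: "generic"
```

Real ad networks combine far more signals, but the core idea is the same: a stable identifier in your browser ties today's visit to yesterday's.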
When answering emails, predictive replies are available, and when you write "attached below" in an email without attaching anything, Gmail prompts you. The accuracy of these responses shows just how closely the software scans your email.
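That attachment reminder can be approximated with a simple text scan. The sketch below is hypothetical and not Gmail's actual logic: it just flags an email that mentions an attachment but includes none:

```python
import re

# Hypothetical sketch of an "attachment reminder": flag emails that
# mention an attachment but do not include one. Not Gmail's real code.
MENTION = re.compile(r"\b(attached|attachment|enclosed)\b", re.IGNORECASE)

def needs_attachment_prompt(body: str, attachments: list) -> bool:
    """True if the body mentions an attachment but none is present."""
    return bool(MENTION.search(body)) and not attachments

print(needs_attachment_prompt("Report attached below.", []))  # True
print(needs_attachment_prompt("See you tomorrow.", []))       # False
```

Even this crude version requires reading the full text of your message, which is the point: convenience features work by scanning content.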
On social media, your recommended follow list is also based on your app activity. Instagram is well-known for constantly adjusting its platform to identify the best way for its users to surf the app. Instead of arranging images in chronological order, the feed is ordered by your interaction levels with the various accounts. By pushing your favourite photos to the top, it lets you browse what you are most interested in, even when you are short on time.
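The switch from chronological order to interaction-based ordering can be pictured as a simple re-sort. The scoring below is invented purely for illustration; Instagram's real ranking is far more complex and not public:

```python
# Illustrative only: rank posts by how much the viewer interacts with
# each account, instead of by recency. All weights are made up.
posts = [
    {"account": "friend_a", "timestamp": 100},
    {"account": "brand_x",  "timestamp": 300},
    {"account": "friend_b", "timestamp": 200},
]
# How often the viewer likes/comments on each account (hypothetical).
interaction = {"friend_a": 0.9, "friend_b": 0.6, "brand_x": 0.1}

chronological = sorted(posts, key=lambda p: -p["timestamp"])
ranked = sorted(posts, key=lambda p: -interaction[p["account"]])

print([p["account"] for p in chronological])  # ['brand_x', 'friend_b', 'friend_a']
print([p["account"] for p in ranked])         # ['friend_a', 'friend_b', 'brand_x']
```

The same three posts, two different feeds: the only thing that changed is which behavioural data the sort key uses.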
Even seemingly simple applications like Microsoft PowerPoint use algorithms and user patterns to find out the best layouts for slides to attract attention and prevent people from getting distracted. Human patterns are constantly studied to better applications in order for users to maximise their time on the application.
Though all this means a more convenient social media and internet experience, it also means that all your activity is being tracked. There is almost always someone watching from some other end, so even your private information might not be that safe.
The ease of creating fabricated content
When Snapchat launched its face-swap filter, everyone thought it was just fun and games. Well, it was, until others saw it as an opportunity to create something more. Now you can find websites and applications that morph faces and create lookalike pictures, which some people misuse to fabricate photos.
It is even more dangerous because there are video options for this too. In one viral video, former President of the United States Barack Obama gives a fabricated speech, and everything looks real. That video was created for laughs, but if someone made a controversial fake speech during an election period, it could change the results drastically.
These innovations can compromise the exclusivity of authentic videos and pictures from specific people as now they can be made with just a few clicks.
Sir Anthony Seldon, vice-chancellor of the University of Buckingham, told Mail Online that placing such AI technology in the wrong hands could end in horror. He mentioned online predators and their new-found ability to impersonate celebrities or even teachers in order to exploit children.
AI is progressing to do what humans cannot, but there is still plenty of trial and error in these early stages. To contain the dangers, people should be extra cautious when dealing with anything online. Unfortunately, we are usually too trusting and do not second-guess how dangerous such online interactions can be.
Sir Anthony's greatest worry is that if humans program AI to think for itself, it may go into Frankenstein mode, replicate itself, and possibly take over the world.
AI chatbots and identity theft
People used to program technology with AI to do mundane human tasks. But now, AI is geared towards thinking and emotions to replicate human behaviour for better conversation and functions. Some chatbots can have meaningful discussions beyond their basic programming. Higher-tech AI can even develop a personality, be programmed to mimic someone's habits, or start mirroring your tendencies.
This is cool and experimental, but if misused, it opens the door to identity abuse. The more people know about you, the easier it is to pretend to be you. With the help of AI, people can scan for information without scouring the internet for hours themselves.
People should be cautious when dealing with chatbots, especially those on sketchy shopping sites. Personal information like credit card details and your home address can be tracked and saved by people intending to commit fraud. Usually, this can be dealt with using anti-malware toolkits, but one should always be wary, as these AI technologies are constantly evolving.
Spambots and social media’s agenda
On the flip side, AI chatbots can also wrongly influence people. Because they are unable to think on their own, all their replies and discussion topics have to be programmed. Read: They say what their programmers want them to say.
Political chatbots are highly controversial. Even though they might be made to have a neutral standpoint, they might end up being biased based on the programmers’ view of what constitutes “neutral.” And because they are unable to go extremely in-depth with discussion and reply to completely new topics, they are limited to the replies and slogans that they know.
Sometimes, this ends up sounding crude, with the same slogans repeated over and over without any reason or explanation.
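In the simplest case, such a bot is little more than a keyword-to-slogan lookup. This toy sketch, entirely hypothetical, shows why replies repeat: any topic outside the programmed list falls back to the same stock line:

```python
# Toy rule-based bot: replies are fixed by the programmer, so unknown
# topics always get the same fallback slogan. Entirely hypothetical.
REPLIES = {
    "economy": "Our plan will create jobs.",
    "healthcare": "We will protect your healthcare.",
}
FALLBACK = "Vote for progress!"

def reply(message: str) -> str:
    """Return the canned answer for the first matching topic keyword."""
    for topic, answer in REPLIES.items():
        if topic in message.lower():
            return answer
    return FALLBACK

print(reply("What about the economy?"))      # "Our plan will create jobs."
print(reply("Tell me about space policy."))  # falls back: "Vote for progress!"
```

The bot's entire worldview is whatever its author typed into that table, which is exactly the bias problem described above.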
On 18 October 2018, some 250,000 tweets translating to "we all have trust in Mohammed Bin Salman" appeared in reply to the disappearance of Saudi journalist Jamal Khashoggi. The tweets were so similar that people believed them to be from bots. Spambots can also be used to flood social media with an agenda, so people have to proactively check the sources of everything they read.
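One reason near-identical tweets stand out is that simple similarity checks catch them. The sketch below uses word-overlap (Jaccard) similarity, a common first-pass technique for spotting copied messages; the threshold and example texts are invented for illustration:

```python
def jaccard(a: str, b: str) -> float:
    """Word-overlap similarity between two messages (0.0 to 1.0)."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

# Invented examples: near-identical posts score high, unrelated ones low.
m1 = "we all have trust in mohammed bin salman"
m2 = "we all have trust in mohammed bin salman !"
m3 = "looking forward to the weekend"

print(jaccard(m1, m2) > 0.8)  # True: likely coordinated copies
print(jaccard(m1, m3) > 0.8)  # False: unrelated messages
```

Platforms use much more sophisticated detection, but even this crude measure illustrates why thousands of copy-pasted slogans are easy to spot, and why readers can apply the same scepticism themselves.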