Artificial Intelligence: Putting the AI in “brAIn”

What thinks like a human, acts like a human, and now even speaks like a human…but isn’t actually human? The answer is: Artificial Intelligence.

Yes, that’s right: the futuristic self-driving smart cars, talking robots, and video calling that we once saw in The Jetsons TV show are now more or less a reality in 2022. Much of this is thanks to the development of Artificial Intelligence.

What is Artificial Intelligence?

Artificial Intelligence (AI) is an umbrella term with many sub-definitions. Scientists have not yet agreed on one single definition, but the term itself was coined by Stanford professor John McCarthy…all the way back in 1955. McCarthy defined artificial intelligence as “the science and engineering of making intelligent machines”. He went on to invent the list-processing language LISP, which is still used by numerous industry leaders, including Boeing (whose Simplified English Checker assists aerospace technical writers) and Grammarly (the grammar-checking add-on that many of us use, and that, coincidentally, I am using as I write this piece). McCarthy is regarded as one of the founders of AI and is recognized for his contributions to the field.

Subcategories and Technologies

Within the overarching category of AI are smaller subcategories such as Narrow AI and General AI. Beneath these subcategories are technologies such as Machine Learning and Deep Learning that help them function and meet their objectives.

Narrow AI: Also known as “weak AI”, this is task-focused intelligence. These systems focus only on specific jobs, like internet searches or autonomous driving, rather than complete human intelligence. Examples include Apple’s Siri, Amazon Alexa, and autonomous vehicles.
General AI: Also known as “strong AI”, this refers to AI whose combined capabilities rival a human’s ability to think for itself. Think the robots in your favorite science-fiction novel. Science still seems far from reaching General AI, as it is proving much more difficult to develop than Narrow AI.

Technologies within AI Subcategories

Machine Learning requires human involvement to learn. Humans create hierarchies and pathways for both data inputs and outputs. These pathways allow the machine to learn with human guidance, but this approach requires more structured data for the computer.
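
To make that concrete, here is a minimal sketch of “classic” machine learning in Python using scikit-learn: a human picks the structured features and supplies the labels, and the model learns rules from those examples. The spam-filter features and numbers below are invented purely for illustration.

```python
# A minimal sketch of "classic" machine learning: a human decides which
# structured features matter (made-up numbers here), and the model learns
# rules from the labeled examples. Requires scikit-learn.
from sklearn.tree import DecisionTreeClassifier

# Hand-picked features for a toy spam filter: [num_links, num_exclamation_marks]
X = [[0, 0], [1, 0], [8, 5], [6, 3], [0, 1], [7, 6]]
y = [0, 0, 1, 1, 0, 1]  # labels supplied by a human: 0 = legitimate, 1 = spam

model = DecisionTreeClassifier().fit(X, y)
print(model.predict([[5, 4]]))  # e.g. [1] -> flagged as spam
```

The key point is that the structure comes from people: someone has to decide that “number of links” and “number of exclamation marks” are the features worth counting.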

Deep Learning allows the machine to make those pathway decisions by itself, without human intervention. Between the simple input and output layers are multiple hidden layers, referred to as a “neural network”. This network can receive unstructured raw data, such as images and text, and automatically learn to distinguish them and decide how they should be processed.
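
By contrast, here is a rough sketch of the hidden-layer idea, again with scikit-learn: a small neural network learns to classify handwritten digits straight from raw pixel values, with no hand-picked features in between. Real deep-learning systems use far larger networks and dedicated frameworks; this is only meant to show the shape of the approach.

```python
# A rough illustration of the "hidden layer" idea: a small neural network
# learns to classify handwritten digits from raw pixel values, with no
# hand-crafted features in between. Requires scikit-learn.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()  # 8x8 grayscale digit images as raw pixel values
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, random_state=0)

# Two hidden layers sit between the pixel inputs and the digit outputs.
net = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
net.fit(X_train, y_train)
print("test accuracy:", net.score(X_test, y_test))
```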

Both Machine and Deep Learning have allowed businesses, healthcare, and other industries to flourish from the increased efficiency and time saved by minimizing human decisions. Perhaps because this technology is so new and unregulated, we have been able to see how fast innovation can grow uninhibited. Regulators have been hesitant to tread in the murky waters of this new and unknown technology sector.

Regulations

Currently, there is no federal law regulating the use of AI. States seem to be in a trial-and-error phase, attempting to pass a range of laws. Many of these laws deploy AI-specific task forces to monitor and evaluate AI use in that state, or prohibit the use of algorithms in ways that unfairly discriminate based on ethnicity, race, sex, disability, or religion. A live list of pending, failed, and enacted AI legislation in each state can be found here on the National Conference of State Legislatures’ website.

But what goes up must come down. While AI increases efficiency and convenience, it also poses a variety of ethical concerns, making it a double-edged sword. We explore the ups and downs of AI below and pose ethical questions that might make you stop and think twice about letting robots control our world.

Employment

With AI emerging in the workforce, many are finding that administrative and mundane tasks can now be automated. Smart contract-review systems use Optical Character Recognition (OCR) to scan documents and recognize the text in an uploaded image. The AI can then pull out standard clauses or noncompliant language and flag it for human review. This, however, still ultimately requires human intervention.
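
As a rough illustration of just the flagging step, the sketch below assumes OCR has already turned a scanned contract into plain text, then searches it for a few hypothetical phrases a reviewer might care about. Actual contract-analysis tools are far more sophisticated than a keyword list, but the “flag it for a human” hand-off looks much the same.

```python
# A toy sketch of the review step only: assume OCR has already produced the
# plain text of the scanned contract, then flag hypothetical phrases for a
# human to look at. Real contract-analysis tools are far more sophisticated.
import re

# Entirely made-up examples of language a reviewer might want flagged.
WATCHLIST = [
    r"unlimited liability",
    r"automatic renewal",
    r"non-refundable",
]

def flag_for_review(contract_text: str) -> list[str]:
    """Return the watchlist phrases found in the OCR'd contract text."""
    return [p for p in WATCHLIST if re.search(p, contract_text, re.IGNORECASE)]

ocr_output = "This agreement is subject to Automatic Renewal each year..."
print(flag_for_review(ocr_output))  # ['automatic renewal'] -> routed to a human
```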

One growing concern with AI and employment is the possibility that AI may take over certain jobs completely. Consider the effect of self-driving cars on truck drivers. If autonomous vehicles become mainstream for the large-scale transportation of goods, what happens to those who once held this job? Does the argument that there may be “fewer accidents” outweigh the unemployment that accompanies the switch? And what if the AI fails? Could there be more accidents?

Chatbots

Chatbots are computer programs designed to simulate human communication. We often see them in online customer-service settings. The AI lets customers hold a conversation with the chatbot, ask questions about a specific product, and receive instant feedback. This cuts down on waiting times and improves service levels for the company.
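
A toy, rule-based sketch of the idea is below: it matches the customer’s message against a few invented keywords and replies instantly. Real customer-service chatbots rely on far more capable language models rather than keyword lookups, but the request-and-instant-response loop is the same.

```python
# A toy, rule-based sketch of the chatbot idea: match the customer's message
# against a few hypothetical keywords and reply instantly. The keywords,
# answers, and policies below are invented for illustration only.
RESPONSES = {
    "shipping": "Standard shipping usually takes 3-5 business days.",
    "return":   "You can start a return from the Orders page within 30 days.",
    "hours":    "Our support team is available 9am-5pm, Monday through Friday.",
}

def reply(message: str) -> str:
    lowered = message.lower()
    for keyword, answer in RESPONSES.items():
        if keyword in lowered:
            return answer
    return "I'm not sure about that -- let me connect you with a human agent."

print(reply("How long does shipping take?"))
print(reply("What's your return policy?"))
```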

While customer-service chatbots may not spark any concern for the average consumer, the fact that these bots can hold conversations almost indistinguishable from an actual human may pose a threat in other settings. Forget catfishing; now individuals have to worry about whether the “person” on the other side of their chatroom is a person at all, or a bot designed to elicit emotional responses from victims and eventually scam them out of their money.

Privacy

AI now gives consumers the ability to unlock their devices with facial recognition. It can also use those faces to recognize people in photos and tag them on social media sites. Beyond our faces, AI follows our behaviors and slowly learns our likes and dislikes, building a profile on us. The Netflix documentary “The Social Dilemma” recently examined the controversy surrounding AI and social media use. In the film, the algorithm is portrayed as three small men “inside the phone” who build a profile on one of the main characters, sending notifications during periods of inactivity from apps that are likely to generate a response. With AI, there is a very fine line around what information stays undisclosed, and we must be diligently aware of what we are opting into (or out of) to protect our personally identifiable information. While this may not be a major concern for those in the United States, it may raise concerns for civilians in countries under dictatorships that use facial recognition as a tool to retain control.

Spread of Disinformation and Bias

AI is only as smart as the data it learns from. If it is fed data with a discriminatory bias, or any bias at all (be it political, musical, or even your favorite movie genre), it will begin to make decisions based on that information.

We see the good in this – new movie suggestions in your favorite genre, an ad for a sweater you didn’t know you needed – but we have also seen the spread of false information across social media sites. Often, algorithms only show us news from sources that align with our political affiliation, because those are the sources we tend to follow and engage with. This leaves us with a one-sided view of the world and widens the gap between parties even further.
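
As a toy illustration of how engagement-driven ranking can narrow a feed, the sketch below ranks articles purely by how often the user has clicked on each (invented) source before, so dissenting sources sink to the bottom of the feed.

```python
# A toy sketch of how engagement-driven ranking can narrow a feed: articles
# from sources the user already clicks on get ranked higher, so the feed
# keeps showing more of the same. Sources and scores are invented.
engagement = {"Outlet A": 0.9, "Outlet B": 0.7, "Outlet C": 0.1}  # past click rates

articles = [
    {"source": "Outlet A", "title": "Opinion piece 1"},
    {"source": "Outlet C", "title": "Opposing viewpoint"},
    {"source": "Outlet B", "title": "Opinion piece 2"},
]

# Rank purely by how much the user has engaged with each source before.
feed = sorted(articles, key=lambda a: engagement[a["source"]], reverse=True)
for item in feed:
    print(item["source"], "-", item["title"])
# "Opposing viewpoint" sinks to the bottom, reinforcing the one-sided view.
```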

As AI develops, we will be faced with new ethical questions every day. How do we prevent bias when it is almost human nature to begin with? How do we protect individuals’ privacy while still letting them enjoy the convenience of AI technology?

Can we have our cake and eat it too? Stay tuned in the next few years to find out…


All’s Fair in Love and Romance Scams

In 2014, 81-year-old Glenda thought she had met the love of her life. The problem? Their entire relationship was virtual. The individual on the other end of Glenda’s computer sold her a fictional narrative: he was a United States citizen working in Nigeria. Glenda and this man developed their virtual “relationship” without ever meeting in person. After some time, the man began asking Glenda for money to help his business and to get back to the United States. Glenda, wanting to help her love, immediately sent the money. The requests became more frequent. When the small money transfers weren’t enough, he asked her to open personal and business bank accounts to transfer funds between the United States and overseas.

Despite numerous warnings from the FBI, local police, and banks to stop, Glenda still believed the man she met online loved her and needed help. She continued illegally transferring money overseas for the next 5 years and would eventually plead guilty to two federal felonies. Glenda was a victim of a romance scam and paid a steep price.

Unfortunately, Glenda’s situation, while extreme, is far from a rare occurrence today. In 2021 alone, the Federal Trade Commission (FTC) saw consumers report $547 million in losses due to romance scams, a concerning 80% more than those reported in 2020. In total, the FTC has seen an astronomical $1.3 billion in cumulative romance scam losses reported in the last five years. And these are just the scams that were reported to the FTC. Many victims go without reporting due to the shame and stigma that comes with falling prey to an online scam.

Romance scams, often referred to as “sweetheart scams”, occur when an individual (or group of individuals) fabricates an online persona and targets vulnerable persons for money.

These scammers build a fake relationship with the victim through messages, establishing empathy and trust over a short amount of time. Once the relationship is built, the scammer suddenly succumbs to financial and/or medical hardship. The initial request for money is typically a small amount, and the victim may even be repaid the first time to negate any doubts that this is a scam; after the second, third, and fourth requests, the victim is likely never to see their funds (or their “love”) again.

The elderly population is especially vulnerable to online scams. Seniors tend to be more trusting than younger generations and usually have significant financial resources (a home they own, retirement savings, government benefits). Cognitive decline and unfamiliarity with technology also leave this group at a disadvantage in defending themselves or recognizing when someone is feigning friendship rather than offering a genuine connection. COVID-19 has made the elderly even more vulnerable in recent years: many were forced into isolation and could only stay in contact with family and loved ones by getting internet devices, opening up a whole new world. Unmonitored access to the internet, coupled with increased loneliness, made elders the perfect target for romance scams.

Are dating sites liable for promoting fraudsters to unsuspecting victims? The short answer is no.

Under 47 USC Section 230, interactive computer service providers (a.k.a. social media and dating sites) are immune from liability for claims arising out of the content that third parties publish to their sites.

In 2022, a federal court considered the Federal Trade Commission’s claims against Match Group Inc. (owner and operator of Match.com, Tinder, PlentyofFish, OkCupid, Hinge, and several other dating sites), which asserted that:

  1. Match.com misrepresented to consumers that profiles were interested in “establishing a dating relationship”, when in numerous instances these profiles had been set up by individuals with the intent to defraud; and
  2. Match “exposed consumers to the risk of fraud” by allowing accounts that had been reported or flagged for fraud, and were still under review, to continue exchanging communications with other subscribers.

The Northern District of Texas dismissed both counts, holding that under Section 230, Match was entitled to immunity from liability for a third party’s fraudulent content and actions. It seems that if victims are looking for recovery, they won’t find it in the courts or through the dating sites themselves.

This looks like a job for the FBI…

Or maybe not.

The Federal Bureau of Investigation works with its Internet Crime Complaint Center (IC3) and Recovery Asset Team (RAT), along with the Treasury Department’s Financial Crimes Enforcement Network (FinCEN), to recover monetary losses from internet scams. Unfortunately, the FBI typically takes on international cases involving single transfers over $50,000 that fall within a 72-hour reporting window. Romance scammers typically request money from elderly victims in smaller amounts over an extended period (the median loss for romance fraud victims in their 70s is $6,450). Because of this high threshold and short reporting window, a majority of romance scam victims never report their losses or see their money again.

In reality…YOU Are Your Best Defense.

Prevent

Do not send money to someone you have never met in person.

Advocate

Check in on your loved ones who are living alone. They may be less inclined to turn to virtual relationships and send money if they have real-life connections.

Check with banks and financial institutions about regular check-in schedules for elderly clients, or talk with your loved ones about helping monitor their accounts if you notice they are in cognitive decline.

Report

If you or a loved one has been the victim of a romance scam: 1) contact your financial institution immediately; 2) report the fraud to the dating site to try to shut down the fraudster’s account; and 3) report the fraud to the Federal Trade Commission.
