Ethics at Tech

By: Kelley Freund | Categories: Alumni Interest


New technology has the potential to create a better future. But as newer, smarter innovations are introduced into our world, the potential for misuse has never been greater. Those who develop new tech are increasingly responsible for understanding what happens when their products head out into the world.

Here we take a look at 10 issues at the intersection of technology and ethics, as told through the expertise of Georgia Tech students, faculty, and alumni.

Social Media

Kathy Pham, CS 07, MS CS 09, a fellow and faculty member at Harvard and the lead of Mozilla's Responsible Computer Science Challenge.

The people living in Myanmar may have limited internet access, but they do have Facebook. In 2017, hateful anti-Muslim content was posted on the social media platform before and after state-led violence displaced 700,000 Rohingya Muslims. Facebook was later taken to task for being slow to mitigate the situation.

According to Kathy Pham, CS 07, MS CS 09, a fellow and faculty member at Harvard and the lead of Mozilla’s Responsible Computer Science Challenge, this scenario is a perfect example of a technology company not fully understanding the community where it launched its product.

“They didn’t know that, for many people in Myanmar, Facebook is their main news source,” Pham says. “When we build technology, we have to make sure we understand the social human part and not just the tech part. We have to ask questions: How do people share information and communicate? How might our products negatively affect communities?”

For a long time, the technology industry has ignored these questions, and developers don’t often think of building platforms as an interdisciplinary effort. But Pham says it’s naive to disregard fields like social science, politics, policy, and even history when building technology. Because these platforms are now so ubiquitous, developers have a responsibility to bring people into the room who can help answer those questions. But what about removing or blocking content? Does that cross a line?

“Multi-billion-dollar companies with some of the smartest people around the world still haven’t solved this question. Only this year are we seeing companies take a stand about what they see as inappropriate content.”

Pham says there’s no perfect guidebook, but individual platforms must create their own set of values. “Social media platforms can be a place where racism spreads but can also be a place which highlights police brutality that people have never seen before—it can be both at the same time. A company has to figure out where it stands, and when something becomes too much. The moment the team is in crisis mode and debating, that’s a really hard environment to be in.”

Data Privacy

Peter Swire, a professor of law and ethics in the Scheller College of Business, and Annie Antón, ICS 90, MS ICS 92, PhD CS 97, a professor in the School of Interactive Computing.

The Third Amendment to the U.S. Constitution prohibits the quartering of soldiers in private homes, and it’s the favorite amendment of Peter Swire, a professor of law and ethics in the Scheller College of Business.

“Having a sergeant in your living room is a big invasion of privacy, and it was something King George did before the American Revolution,” he says. “It just illustrates that in each new generation, there are new ethical privacy issues.”

Today, our privacy concerns have more to do with the vast amounts of our personal data that are circulating, thanks to new technologies, from cell phones that track your location to social media platforms that track your social graph.

But who should have access to the data, and what should they be allowed to do with it? These are questions our society is currently grappling with, and Swire explores them in Privacy, Technology, Policy and Law, a course he co-teaches with Annie Antón, ICS 90, MS ICS 92, PhD CS 97, a professor in the School of Interactive Computing.

Antón’s research focuses on privacy from a software development perspective: How do we design systems that contribute to society, are trustworthy, respect privacy, and comply with the law? It’s not an easy feat, but experience has taught Antón that systems can be all those things when software engineers and lawyers work together.

“In our class, it takes both Prof. Swire and me to answer these questions,” Antón says. “And this whole field requires both kinds of people in the room to build something that works.”

Whether it’s soldiers in living rooms, cell phones, or facial recognition, each new privacy issue has required our society to learn to adapt. Tech’s newly created School of Cybersecurity and Privacy aims to address these larger issues in privacy.

On the legal front, individual states are stepping up with privacy laws of their own. “We seem to be at a turning point, and the American political process is in the middle of big changes when it comes to privacy rules,” Swire says.

3D Printing

Amit Jariwala, Director of Design and Innovation for the Institute's George W. Woodruff School of Mechanical Engineering.

With more than 20 makerspaces across Georgia Tech’s campus, faculty, staff, and students have the opportunity to explore their creativity, using everything from sewing machines and woodworking equipment to 3D printers.

The technology of 3D printing has been around since the 1980s, but it gained prominence in the 2000s and 2010s, decades that featured the first 3D-printed kidney, prosthetic limb, and car.

Today, companies around the world are using the technology to build things like affordable housing and personal protective equipment for healthcare workers, thanks to the development of faster and more efficient machines. These machines are also getting cheaper.

“Technology is democratizing innovation,” says Amit Jariwala, director of design and innovation for the Institute’s George W. Woodruff School of Mechanical Engineering. “If you have access to a 3D printer, you can create things that were not possible several years ago.”

So does that mean the Average Joe can use a 3D printer to create an item like a gun? At Georgia Tech’s makerspaces, students are not permitted to build anything that looks like a weapon but are encouraged to create items to develop their creative confidence and solve a problem to benefit society. While not all community makerspaces have nurtured such a positive culture, and while the average person can pick up a cheap 3D printer for $200, Jariwala says constructing objects like guns isn’t that easy. “A 3D printer is not as simple as ‘What you see is what you get.’ There is still an element of assembly and fabrication. You also need more-advanced printers to make more durable components.”

Jariwala thinks a bigger issue is ownership, as files can easily be copied and shared, leading to copyright issues similar to those created when people began using the internet to share music and movies. And questions regarding ownership lead to questions about responsibility. If a consumer modifies a manufactured product with a 3D printer and that product fails, who is held responsible? “This technology opens a lot of questions that regulation needs to catch up with.”

Deepfake Videos

Zachary Tidler, graduate student in the College of Sciences.

Zachary Tidler credits Ellen DeGeneres for his first exposure to deepfakes, or videos that use artificial intelligence to replace the likeness of one person with another. About five years ago, the talk show host posted a video of Pope Francis pulling the cloth off an altar, leaving everything on top still standing. And Tidler bought it.

While the Pope Francis deepfake was lighthearted, there are plenty of examples of the darker side of the technology. The faces of celebrities have been imposed on porn stars’ bodies, and politicians have been featured in videos saying words they never actually said in real life.

To make a deepfake video, a creator first trains a neural network on many hours of real footage of the person, then combines the trained network with computer-graphics techniques to superimpose a copy of the person onto a different actor. (This is how the late actor Paul Walker appeared in Fast & Furious 7.) Deepfake technology was originally available only to high-level computer science researchers, but Tidler says it has now been packaged in such a way that anyone with a moderately powerful computer can make a video, which means a lot of people have the ability to manipulate others and spread false information.
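
For readers curious what that "train, then superimpose" pipeline looks like in code, here is a minimal sketch of the shared-encoder, two-decoder autoencoder idea behind early face-swap tools. It is written in PyTorch with toy stand-in data; the network sizes, image dimensions, and training loop are illustrative assumptions rather than any real tool's implementation.

```python
import torch
import torch.nn as nn

IMG = 64 * 64 * 3  # flattened 64x64 RGB face crops (an assumed size)

def mlp(in_dim, out_dim):
    return nn.Sequential(nn.Linear(in_dim, 512), nn.ReLU(), nn.Linear(512, out_dim))

encoder   = mlp(IMG, 128)   # shared encoder: learns identity-agnostic face features
decoder_a = mlp(128, IMG)   # reconstructs person A's face from those features
decoder_b = mlp(128, IMG)   # reconstructs person B's face from those features

params = list(encoder.parameters()) + list(decoder_a.parameters()) + list(decoder_b.parameters())
opt = torch.optim.Adam(params, lr=1e-3)
loss_fn = nn.MSELoss()

# Stand-in data: random tensors in place of many hours of real, aligned face crops.
faces_a = torch.rand(32, IMG)
faces_b = torch.rand(32, IMG)

for step in range(100):  # toy training loop
    loss = loss_fn(decoder_a(encoder(faces_a)), faces_a) + \
           loss_fn(decoder_b(encoder(faces_b)), faces_b)
    opt.zero_grad()
    loss.backward()
    opt.step()

# The "swap": encode a frame of person B, then decode it with person A's decoder.
with torch.no_grad():
    fake_frame = decoder_a(encoder(faces_b[:1]))  # B's expression rendered with A's face
```

In a real tool, that swapped face would then be blended back into each video frame, which is the computer-graphics step described above.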

And many will believe that misinformation. Tidler conducted research for his master’s thesis on who is most susceptible to believing deepfakes. He found a strong correlation between affect detection ability—a person’s ability to read cues in another person’s eyes, face, or body language to determine how that person is feeling—and deepfake detection ability. “If you’re bad at spotting emotions in people’s faces, you’re more likely to be bad at spotting a deepfake video,” says Tidler.

The computer science community is trying to combat the problem. Last year, social media platforms including Facebook and Twitter banned deepfakes from their networks. Tidler says that Microsoft has a tool that gives a video a deepfake score, but that hasn’t completely solved the problem.

“It becomes something of an arms race because the deepfake networks and algorithms get a little better, and then the algorithms and neural networks trying to identify deepfakes get a little better, and so on,” Tidler explains.

Smart Cities

Nassim Parvin, associate professor of Digital Media in Tech's School of Literature, Media, and Communication and director of the Design and Social Justice Studio.

When people discuss the benefits of smart cities—which use networks of distributed sensors to collect data for managing resources in real time—there is a classic scenario they bring up. If an ambulance is trying to bring a patient to the hospital, the city’s central command and control center can turn the lights green along the route so the patient can get there faster.

“In theory, that’s nice because we will be saving lives,” says Nassim Parvin, associate professor of Digital Media in Tech’s School of Literature, Media, and Communication and director of the Design and Social Justice Studio. “But there is so much beyond these simple engineering scenarios to consider.”

For example, what happens when the patient reaches the hospital? They might wait in a long line because this is the only hospital nearby. Maybe they won’t be able to pay their bill because they don’t have health insurance. In this case, the speed of the ambulance is not the problem.

“So why are we spending so much time and money to get our emergency vehicles more efficient, when what we really need is to address the inefficiencies of our healthcare system?” Parvin asks. There are other things to think about when considering an investment in a smart city, such as hacking and the cost of materials. “We lack a systemic way of thinking about where we actually need to invest our time and money in order to mitigate some of our problems,” Parvin says. She believes the solution might lie in bringing the humanities, social sciences, and ethics into conversations surrounding technology.

“In the absence of substantive ethical education, students see ethics as restrictive, or feel it’s not up to them to think about these long-term ethical questions. But engineering asks, ‘What technologies can make our life better?’ That’s essentially an ethical question. This kind of education will make our students better designers and engineers and lead to more meaningful and effective technical policy intervention.”

Moral Algorithms and Self-Driving Cars

Nassim Parvin, associate professor of Digital Media in Tech's School of Literature, Media, and Communication and director of the Design and Social Justice Studio.

In the early 20th century, when children were injured by cars, the ensuing court cases ruled overwhelmingly in favor of car owners and drivers, blaming the incidents on their mothers’ negligence.

“Well, of course it’s the mother’s fault!” says Nassim Parvin sarcastically. Parvin is an associate professor of Digital Media in Georgia Tech’s School of Literature, Media, and Communication, where she also directs the Design and Social Justice Studio.

“But you can draw a straight line from those decisions to the fact that it’s too dangerous for children to play in the streets now.” Cars and car companies have been given a lot of power over our streets and public spaces, and we continue to cede that power today with the advent of autonomous vehicles. Self-driving cars may soon have the right to make life-and-death decisions, such as braking to avoid a pedestrian even when doing so puts the people in the car at risk. The car makes these decisions through moral algorithms programmed into its software. And according to Parvin, that gives the car too much power in too critical a scenario.

“Ethical situations are, by nature, ambiguous, but moral algorithms depend on certainty and clear rules to make decisions. They shouldn’t handle these ambiguous cases, such as the literal life-or-death situations of self-driving cars, or things like who should get health insurance or who is eligible for a loan, which also affects people’s lives.”

But are there bigger questions here? What if we invested in more public transportation options? What if we replaced parking lots and driving lanes with sidewalks, bike lanes, and green spaces that have been shown to improve the physical and civic health of our communities? Can we imagine a future when kids actually play in the streets?

“It’s a failure of ethical imagination if we say the question about self-driving cars is just a matter of life and death at the intersection,” Parvin says. “You have to think about what it’s like to live in a city where at any moment, you can be the target of a killing algorithm. Is that a city we want to live in? That’s the ethical question.”

Genetic Testing

Michael Goodisman, associate professor in Georgia Tech's School of Biological Sciences.

With companies like 23andMe and Ancestry.com, it has become somewhat common for people to have their genome sequenced. But what happens when results reveal you have a gene that could potentially make you sick?

Michael Goodisman, an associate professor in Georgia Tech’s School of Biological Sciences, says genetic testing is very different from getting tested for something like anemia.

“I not only learn something about myself, I learn something about all my relatives. If I find out I’ve inherited the BRCA1 mutation, which can lead to breast cancer, I know there’s a 50 percent chance my sister has it, and a 50 percent chance my children have it. Should I tell them? Does the doctor who diagnosed me have an obligation to tell them?”

But Goodisman says some of the most interesting ethical issues surrounding genetic testing concern testing IVF embryos for genetic traits (a process known as preimplantation genetic diagnosis). This is usually done when parents are carriers for a serious disease, and if results reveal the embryo has that disease as well, it won’t be implanted. This can present an ethical challenge, depending on how you view the embryo at that stage. It can also raise eyebrows if you’re not testing for disease, but for something like the embryo’s sex or other traits.

But others see it differently. “All medicine is trying to better the human condition,” Goodisman says. “Don’t we have an obligation to eliminate certain mutations that cause cancer? There’s an analogy about how we’re always building better airplanes. If you knew what would make an airplane better, wouldn’t you have to do it?”

Where is the line between fixing problems and enhancing genetics? That question leads us to genetic enhancements (and a bit of bioethics).

Genetic Enhancements (and a Bit of Bioethics)

Michael Goodisman, associate professor in Georgia Tech's School of Biological Sciences.

Goodisman says the simplest argument against genetic enhancement is that we can’t do it well right now. Another argument is that these technologies would only be available to people who could afford them. But the biggest argument against genetic enhancement is that you would be changing not only your health, but the health of future generations—your future generations.

But there is a counterargument. “If we have the ability to, let’s say, change a gene to make your heart stronger, shouldn’t we do that?” Goodisman asks. It’s not clear what the right answer is, and that’s something Goodisman teaches in his bioethics class. The course, which he describes as part science and part humanities, examines the process of science and the ethical implications of biological research.

“Many times, students learn how to do something, but they don’t have the opportunity to step back and think whether they should do it,” says Goodisman. “What makes an action ethical? What are the implications of an action? Are they good? We ask those questions.”

Goodisman says he hammers home three moral principles as the core of the class. The first is autonomy. An action is considered ethical if it allows individuals to do what they want, and it’s considered unethical to impose your own will on someone else. The second moral principle he teaches is beneficence, meaning an action is ethical if you are being kind to others. And the last principle is justice. This says we should treat equal things equally and unequal things unequally.

Here’s how those principles apply to genetic enhancement: If someone wants to make their heart stronger, it would be ethical to give them the autonomy to do that. It would be kind because it would enhance their well-being. However, the action is not necessarily just, because not everyone can undergo genetic enhancement.

“There are very few actions that check all three boxes, and often there is no right answer,” Goodisman says. “It’s important to note two people can be ethical and arrive at very different conclusions.”

Cloning

Michael Goodisman, associate professor in Georgia Tech's School of Biological Sciences.

Who ever thought a sheep could become famous? But that’s what happened in 1996, when Dolly, a female domestic sheep, became the first mammal to be successfully cloned from an adult cell. Dolly was created using somatic cell nuclear transfer (SCNT), a technique that combines an egg cell whose genetic material has been removed with the nucleus of a donor’s somatic (or body) cell, such as a skin or fat cell. Once combined, the egg cell begins to divide and is capable of producing an adult organism with genetic information identical to the donor’s.

In 2018, researchers in China produced identical clones of a primate species—two macaque monkeys. While cloning techniques have improved, they are not perfect, and they are not used on humans. But it is unclear where this will lead. As Goodisman teaches in his bioethics course, there’s not always a clear answer. “The arguments get complicated,” he says.

“This technology means every cell we have is potential life,” Goodisman says. “You can take my skin cell, pull out the genetic material from it, put it into an egg cell, and off you go. Does that mean I’m made up of a trillion potential people? That’s strange to think about.”

Artificial Intelligence

Jason Borenstein, director of Graduate Research Ethics Programs and associate director of the Center for Ethics and Technology at Georgia Tech.

Artificial intelligence can be used to help humans make fairer decisions and reduce the impact of human biases—but what comes out is only as good as what goes in. So what happens when the algorithms that are used in AI systems are biased?

In 2016, a study found that a criminal justice algorithm used in Florida mislabeled Black defendants as “high-risk” at almost twice the rate it mislabeled white defendants. Studies today reveal that facial recognition algorithms identify white faces more accurately than the faces of people of color.

According to Jason Borenstein, who serves as the director of Graduate Research Ethics Programs and associate director of the Center for Ethics and Technology at Georgia Tech, AI systems learn to make decisions based on training data, and it’s possible that the data sample is flawed, with certain groups over- or under-represented. But biases can also come from the people who design or train the systems and create algorithms that reflect unintended prejudices. That’s why Borenstein believes that diversifying the field of AI could make a difference.
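
As a rough illustration of the training-data problem Borenstein describes, the sketch below trains a simple classifier on synthetic data in which one group is heavily under-represented, then measures each group's error rate. The groups, the numbers, and the use of scikit-learn here are illustrative assumptions, not a model of any real system.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Synthetic two-feature data; how the features relate to the label
    # differs slightly between groups (controlled by `shift`).
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] + shift * X[:, 1] > 0).astype(int)
    return X, y

# Training sample: group A is heavily over-represented.
Xa, ya = make_group(5000, shift=+1.0)   # group A
Xb, yb = make_group(100, shift=-1.0)    # group B (under-represented)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Balanced test sets reveal very different error rates for the two groups.
for name, shift in [("group A", +1.0), ("group B", -1.0)]:
    Xt, yt = make_group(2000, shift)
    print(f"{name} error rate: {1 - model.score(Xt, yt):.0%}")
```

The model is never told which group a record belongs to; the skewed results come entirely from whose data dominated training, which is the kind of sampling flaw Borenstein describes.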

“AI needs to be included in the broader diversity efforts that are happening across the country,” he says. “A more diverse community would be better able to spot bias.”

But ridding AI of biases doesn’t negate the need for humans to step in sometimes.

“We assume that technology can make better decisions than people,” Borenstein says. “But we can still be good at some decisions, which is why it’s important for humans to be involved in interpreting data.”

Besides bias, there are other ethical conversations surrounding AI: The more sophisticated the technology, the more AI systems “learn,” and the more unpredictable they become. Borenstein also brings up what’s known as the Black Box problem: We know a piece of technology works, but we don’t necessarily know how it works. Should we be using things when we can’t understand them? And finally, as AI is integrated more and more into our society, how will that impact our interactions? Will people want to interact with technology more than with one another?