Collection (7 items)

Applied Ethics 101

Entrepreneurship and ethics have not always gone hand-in-hand. But they should. Entrepreneurs and innovators develop technologies that have major consequences. This collection highlights 1) why applied ethics is a critical competency and 2) how to help train future leaders to integrate ethics into decision-making and technology development.

1 of 7


For the past few years, I have watched the news coming out of Silicon Valley closely. As someone who has spent the last four decades living, working and teaching in the San Francisco Bay Area, this might seem only natural. But as a former tech executive and entrepreneur who has taught entrepreneurship and innovation at Stanford University for almost 25 years, the news, especially the reports of scandals and wrongdoing, feels personal. Perhaps the news feels personal in part because I know some of the tech execs being written about. I have increasingly wondered what role I, as an entrepreneurship educator, play in this ecosystem as the news reports pile up and become so substantial that they can sustain book-length works.

Reading such accounts about entrepreneurs shook me into a realization. For years, I have espoused the value of teaching every student an entrepreneurial mindset, and now I must do better, as an educator, to help address the factors — both systemic and personal — that lead to these headline-making incidents.

In early March this year, I met an educator asking similar questions. At a UNC Chapel Hill business school lunch, philosopher and Professor Geoff Sayre-McCord introduced himself to a table of entrepreneurship educators and administrators by saying, “Entrepreneurs face a moral liability: that they will become liars.” It was a bold remark to make in a business school setting, and I was intrigued.


For decades, I have taught my students how to dream big, secure funding, gather a team and entice customers and beneficiaries by crafting a promising vision of the future that is only tenuously based in fact. Certainly not all entrepreneurs are liars, Sayre-McCord clarified for me in follow-up conversations, but many of the incentives an entrepreneur encounters encourage deception.

Sayre-McCord’s point left me wondering: Is there a fundamental design flaw in the discipline of entrepreneurship education? Is believing in and selling a future vision all good or all bad for society? Is it time to shift the way we think about the role of ethics in entrepreneurship education?

As a faculty director of the Stanford Technology Ventures Program (STVP) since its inception, I have witnessed entrepreneurship education explode into a vast multitude of university-wide offerings. In terms of the pedagogy and content itself, entrepreneurship education has made great strides in the last three decades: educators have refined curricula based on empirical and theoretical research and broadened the discipline’s scope from early-stage startups to large organizations. Now, I believe it’s time for another shift for the coming decade.

With accumulating reports of wrongdoing, educators must take action to confront any systemic failings. We must equip our students, the next generation of entrepreneurial leaders, with frameworks for principled entrepreneurial action and knowledge – or more simply put, a tool kit for action-oriented practical ethics.

A Budding Discipline

For the last 25 years, I have been an advocate for expanding opportunities and offerings for entrepreneurship education intended for all college students. My passion for entrepreneurship education began with working in Silicon Valley in the 1980s and early 1990s after completing an MBA and Ph.D. at Berkeley. I had seen how new enterprises with new technologies could dramatically transform business-as-usual for the better.

During the formation of Symantec Corporation, I witnessed how software could safeguard sensitive information stored on the nascent personal computer and its networks. I eventually left Symantec after its IPO to co-found another venture producing software applications for early tablet computers with Dan Bricklin, an industry icon and highly principled inventor of the original spreadsheet program. In 1995, during the emergence of the Internet, I joined the faculty at Stanford’s School of Engineering full-time, feeling quite optimistic about the power of entrepreneurship and technology to change the world.

In the mid-1990s, entrepreneurship courses were confined to business schools as second-year electives for MBA candidates. Our mission at Stanford became to ensure that entrepreneurship courses would be available to every student, regardless of their major or school of study. So we founded STVP, an academic center for developing and spreading entrepreneurship education to all students at Stanford by operating within the Department of Management Science and Engineering.

The next two decades were exhilarating. To foster a community of teachers and scholars focused on entrepreneurship and innovation, Professor of the Practice Tina Seelig and I organized almost 50 roundtables on five different continents. We became allies with entrepreneurship educators at other universities to bolster our fledgling discipline. Accomplished scholars and esteemed practitioners including Professor Kathy Eisenhardt, Professor Bob Sutton and Adjunct Professor and serial entrepreneur Steve Blank joined us to develop curricula to teach the skills and behaviors of an entrepreneurial mindset. We saw students harness their creativity, critical thinking skills and technological capabilities to build new products that were helping solve the world’s most challenging problems. We worked continually to improve our teaching methods and curricula, making our discipline deeply grounded in empirical and theoretical research as well as expanding our lessons beyond small startups to large enterprises in business, education and government.

The message caught on even more quickly than we and our peers had expected it would. In about two decades, entrepreneurship education moved from a few MBA elective courses to a core university offering, both nationwide and around the globe. And entrepreneurship, especially entrepreneurship in the technology industry, has seen a similarly swift rise to prominence in all facets of society and the world’s economy. Venture capital investments in early-stage companies have increased nearly sixfold in the US during that period, and according to a recent industry analysis, the American tech sector employs over 11 million people and contributes an estimated $1.6 trillion to the US economy. The result of tech’s rise has been many spectacular services and products, and sadly, the numerous scandals we have seen play out in recent years.

Room for Growth

Today, Stanford entrepreneurship faculty teach well over 100 courses per year. We educate thousands of students in every corner of the campus, teaching them the essence of an entrepreneurial mindset. We cover things like the importance of adaptability, opportunity recognition, resilience, leading teams and creativity. We give them frameworks that help ground debates about when to take risks. We train them to gather data like a scientist doing lab experiments. We push them to practice again and again to get comfortable with failures. We encourage them to be solution-oriented in the near term and opportunity-minded in the long term in order to scale their creations for maximum impact. We consider concepts like product-market fit and how to create business models that will entice investors and attract customers and partners.

Sometimes, though, students are interested in discussing the more personal, less positive aspects of entrepreneurship. Perhaps compelled by the same news stories that leave me feeling so concerned, they ask about how they should navigate ethically difficult situations in the future when working at a startup, when investors bear down on them, when founders breach ethics and when a colleague behaves in an unprofessional manner. With questions like these, I always find myself searching for better answers.

Because of classroom situations like this, I, along with a few dedicated colleagues, have spent the past year researching how values and ethics can be better taught in entrepreneurship courses. I am encouraged by the existing models for mission-driven organizations available in the field of social entrepreneurship. I am emboldened by the comprehensive scholarship and teaching of ethics programs that serve all students. I find further confidence in the conversations we’ve had with ethicists at law, medical, business and engineering schools. Those professional graduate schools began offering ethics courses long ago.


Our goal is to create classroom environments that encourage students to establish their own principles-based worldview. What if each student created a personal mission statement, defining for themselves the values that will guide them? But to do that, I first have to ask questions of myself as an educator. I must find some approximate answer to questions like: What does principled entrepreneurship mean? Can it vary per entrepreneur? Are there any immutable principles for entrepreneurs?

A Different Kind of Value Creation

When teaching ethical behavior in my own classroom, I have found success using role-play simulations with students. For many years in the first course of our Mayfield Fellows Program at Stanford, the source of our simulation was the “Randy Hess” case study, based loosely on the MiniScribe fraud case of the 1980s and developed by the distinguished author Jim Collins. Students prepared by reading the case, and during class one student would volunteer for the role of “Randy Hess.” While other students took on the roles of employees at the startup and its audit firm, I played the role of the unethical CEO.

As that CEO, I would try to persuade the student playing Hess, a member of the finance department, to remedy a revenue shortfall with an illegal “adjustment” to the accounting records. Coming into the class, students were certain that they could not be compelled into doing something they knew to be wrong. Then the role play would begin. As the CEO, I would use various negative influence techniques. If necessary, I even threatened (falsely) to give the student a bad grade in the course. The environment in the class would become tense, and more often than not the “Randy Hess” student relented. We would stop the case and talk about ways to protect yourself from the negative forces of influence and persuasion. The students understood the importance of the conversation: they had just seen how most everyone, even in a role-play simulation, has a breaking point.

Role playing is the method I choose, but the material about motivation and persuasion comes from the work of psychologist and Professor Robert Cialdini. In his bestselling book Influence, Cialdini lists the six factors of influence: reciprocity, scarcity, authority, consistency, liking and consensus. In the Randy Hess role play, I would offer to do something nice for the student playing Hess and then ask for adjusted books in return (reciprocity). I would tell the Hess student that everyone does things like this sometimes (consensus). I’d appeal to the idea of Hess’s personality and sense of duty, suggesting that adjusting the books would make sense since “he’s the kind of guy who would help his company in a hard time” (consistency).

This exercise in teaching applied ethics, though, focuses more on not doing something unethical than on envisioning a positive set of values. Though this model may not map perfectly onto secular educational contexts, I am intrigued by how students are urged to define and form their own codes of conduct within the entrepreneurship school at the University of St. Thomas, a Catholic school in Minnesota. There, Professor Laura Dunham and her colleagues promote the value of “practical wisdom” — essentially, pursuing entrepreneurial projects while considering how a given project will affect its consumers and stakeholders. For Dunham, entrepreneurship isn’t amoral, and at St. Thomas, entrepreneurship is taught as a way to create value beyond the financial kind.

A mindset like the “practical wisdom” framework taught by Dunham can be crucial for first-time entrepreneurs entering the current venture environment, where the perceived need for “speed” in building a new company can outweigh the need for building stakeholder trust. For startup founders looking to deliver on proposals made to financial backers, it can be easy to consider very little besides developing the product, without articulating a core set of principles and the company’s relationship to its clients and community.

I see this same kind of desire to consider entrepreneurship contextually in the venture capitalist Chi-Hua Chien, who assists me in teaching a large undergraduate course at Stanford on technology entrepreneurship. Chien lectures about the “five critical risks of entrepreneurship”: market, team, technology, product and business model. But after giving this lecture for several years, Chien has suggested that we add a sixth risk: values. As someone who leads a fund that invests in early-stage companies, he believes that venture capitalists benefit when they consider founders’ guiding principles. Sussing out principles can give investors a better sense of how founders will act in difficult, murky situations.

These aren’t one-size-fits-all recommendations, but I share them because I believe these models illustrate a way of thinking through the problem of how to teach principles and entrepreneurship. It seems to me that creating new teaching materials will mean teasing apart a few entwined threads:

  • First, we must consider what it means to have values and teach them in a secular, nonpartisan way.
  • Second, we must look at what methods best teach students about themselves in particular and entrepreneurial projects in general.
  • Third, we must consider the unique aspects of the entrepreneur’s environment, from the variety of investment structures and methods for experimentation and growth to the cultural norms within different geographies.


Re-Defining Entrepreneurship Education

Spending the past year researching applied ethics pedagogy has felt surprisingly similar to my first few years at Stanford in the 1990s, helping greatly expand the reach of the university’s entrepreneurship offerings and encouraging educators at other universities to do the same. As in those early years, I’ve had numerous conversations with others, going to each with more or less the same question: Do you think ethics-related teaching materials in entrepreneurship courses can and should be improved? The answer I receive is an overwhelming yes.

In these conversations with educators, administrators, industry leaders and students, I feel an emerging groundswell — so many want to see innovation in the way educators teach entrepreneurial values. I hope that every educator, student, current and aspiring entrepreneur will join me in working to re-define entrepreneurship education, because it is crucial that students are equipped with the awareness, knowledge, skills and principles required to confront complex ethical challenges. Together, we must consider what educators haven’t yet done and what they can do to address this shortcoming in our incredibly vital discipline.

2 of 7


Imagine you’re in a self-driving car, headed to the grocery store. As you look out the window, you spot a school bus in the oncoming traffic. Out of nowhere, a biker swerves into your lane. Your car can do one of two things: hit the biker, or steer into the school bus.

Now imagine you’re one of the engineers who designed that self-driving car. You’re responsible for the small part of the car’s programming that determines how it will respond when unexpected people or objects swerve into its lane. Whose safety—whose life—should the car prioritize? How would you code what the car should do?
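The question is not rhetorical: in software, the choice has to be written down somewhere. The sketch below is purely hypothetical — the `Obstacle` type, the `choose_maneuver` function and its one-line policy are all invented for illustration, and no real autonomous-driving stack is remotely this simple — but it shows how a contested value judgment can end up encoded as a single, ordinary line of code.

```python
# Hypothetical sketch only: reducing the scenario above to code.
# Every name and number here is invented for illustration.

from dataclasses import dataclass

@dataclass
class Obstacle:
    kind: str       # e.g. "cyclist", "school_bus"
    occupants: int  # people put at risk if the car steers toward it

def choose_maneuver(options: list) -> Obstacle:
    """Steer toward the option that risks the fewest people.

    Even this one-line policy encodes a moral stance: that minimizing
    the count of people at risk is all that matters, regardless of who
    they are or how the risk arose. A different team could just as
    plausibly encode a different stance.
    """
    return min(options, key=lambda o: o.occupants)

choice = choose_maneuver([Obstacle("cyclist", 1), Obstacle("school_bus", 30)])
print(choice.kind)  # this policy steers toward the cyclist
```

Whether "minimize occupants" is the right key function is precisely the kind of question an engineer answers, implicitly or explicitly, every time such a branch is written.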

This thought experiment (loosely based on the so-called trolley problem) is growing less hypothetical by the day. Designing emerging technologies, including self-driving cars, creates a thicket of moral dilemmas that most engineers and entrepreneurs are not trained to negotiate.

What’s needed in tech and related fields, philosopher Jason Millar believes, is a greater ability to recognize, articulate and navigate the moral challenges posed by technology—what he describes as ethics capacity-building. Millar, a postdoctoral scholar at Stanford’s McCoy Family Center for Ethics in Society, has not only studied emerging philosophical questions in engineering, robotics, and artificial intelligence—he faced those issues during his years as a practicing engineer.

The trolley problem is an especially dramatic example of a challenge that exists across the technology sector: innovation is happening so rapidly that it’s flying past serious ethical questions without fully addressing them, or recognizing them as ethical questions at all. Russian propaganda designed to undermine the US presidential election flourished unchecked on Facebook and Twitter. Waze users are reshaping traffic flow in communities across the US, dramatically altering the character of once-quiet neighborhoods.

The teams at Facebook, Twitter and Waze likely never imagined themselves as arbiters of free speech or urban design. But, in Millar’s view, that’s part of the problem: whether they think they are or not, engineers have become de facto policymakers, embedding social values and preferences into the products they design. Millar argues engineers need to get better — and soon — at understanding the downstream social effects of the technology they’re creating.


The trolley problem and vexing issues of privacy and free expression will never be easy to solve. But Millar argues that by gaining more awareness of the implications of their choices, engineers and entrepreneurs will actually design better technology.

Here are four principles, based on Millar’s work, that can help entrepreneurs and engineers incorporate ethical thinking into their work — no philosophy degree required.

Start by viewing ethics as essential to good design

Ethics and innovation aren’t at war, Millar emphasizes. “When I talk about thinking about design as an ethical challenge, as well as a technical challenge, I’m thinking about avoiding social failures,” he says—such as Facebook’s role in the spread of Russian propaganda, and Uber’s use of surge-pricing during natural disasters.

Incorporating ethics into design isn’t just about making lawyers happy. When entrepreneurs probe more deeply into the social and moral implications of their products, it can help their products thrive—and allow them to avoid costly or embarrassing missteps that might alienate them from potential customers.

“If you avoid those social failures, you just have a better chance of your technology succeeding. Your business model has to be aligned with the social realities and the ethical realities of your user base,” he says.

Incorporate ethics into the design room

Millar has seen the power of ethical thinking to address design problems in ethics workshops he’s developed, with Stanford Professor Chris Gerdes, at the Center for Automotive Research at Stanford. The workshops focus on the seemingly straightforward question of how self-driving cars should navigate pedestrian intersections. They introduce the engineers to the many stakeholders in that car-pedestrian interaction — car manufacturers, government regulators, municipal governments responsible for signage, vulnerable road users such as individuals with disabilities, and so on — and ask them to imagine what’s important to each of those stakeholders.

Gradually, the engineers stop focusing solely on technical issues (when should the car slow down, and by how much?) and begin grappling with the competing human values at play. What’s ideal for one user might not be for another. A good design solution, the engineers often begin to realize, is one that balances the concerns and priorities of a broader group of stakeholders.

“The value in this work is just in allowing [engineers] to reframe the problem in terms of people and values in order to develop more satisfactory design options,” Millar says.

Borrow from existing models, such as the health care industry

Can an entire industry get better at handling vexing ethical issues? Millar thinks it’s possible—because health care already did.

Fifty years ago, behaviors unthinkable to us today, such as lying to patients (ostensibly for their own good) or administering treatment without patient consent, were a routine part of Western medicine.

Today, however, the practice of informed consent is enshrined in law and medical ethics, and clinicians are much more aware when they’re dealing with ethical issues. An entire field devoted to ethical questions in biology and medicine emerged. Bioethicists now sit on the faculties of medical schools across the country, and clinical ethicists provide valuable services to health care providers and patients on ethical decision-making.

Many of these ideas could be adapted to the technology sector and incorporated into the engineering workflow. Companies could convene groups like hospital ethics boards or institutional review boards to review their new products and features, and flag potential issues before release. And just as bioethicists work alongside physicians and researchers, staff ethicists could counsel engineers and consult on especially complex projects. Workshops like the ones Millar leads at CARS might give engineers new tools to identify ethical challenges in their work.


Make ethics everyone’s job

When Millar was training to be an engineer, he got a typical ethics education, mostly aimed at preventing physical harm: don’t take on projects for which you aren’t qualified. Double-check everything. Never sign off on a design you haven’t reviewed carefully.

These were good lessons, but they didn’t prepare Millar for the quandaries he faced as a working engineer. While doing chip design for a telecommunications company, he began getting requests to build in the technical capabilities that would allow the National Security Agency to spy on consumers.

Features like the ones he was asked to build “have ethical implications, but those implications aren’t the type that I was trained to pay attention to,” Millar says. “It wasn’t a business ethics question. It was more of a technology ethics question: should we be participating in developing the kinds of technologies that do X, Y or Z?” Questions like these ultimately drove Millar’s decision to return to school and study ethics.

Though Millar doesn’t see a one-size-fits-all solution, he does suggest that engineers begin to view design ethics as part of their job. Devote more time to thinking about what you’ve been asked to make, and why. Read widely, and don’t shy away from academic literature that might be relevant to your work. If your company doesn’t have an ethics board or an ethicist on staff, ask for them—and for additional ethics training. If you think the feature you’ve been asked to build has social consequences that haven’t been discussed, tell your manager and colleagues. “Young engineers doing that—that’s a good thing,” Millar says.

Changing the culture of the industry isn’t something individual engineers can or should do alone. Millar believes that many stakeholders, from policy makers to philosophers, have a role in making technology more responsive to ethical concerns.

“It’s just like when you’re doing mechanical design,” Millar reflects. “It’s not just that we need better gear design, or better materials—we need all of that stuff, and then we need a way of thinking about how to make the gears, and we need policies in place to make sure that when we’re making gears, we’re using the right types of materials.

“Designing technology is complex, but we do it all the time. Confronting ethics is just a matter of adding new things to the toolkit—new processes and new ways of thinking about problems.”


6 of 7


I require my students at Stanford to write a failure résumé. That is, to craft a résumé that summarizes all their biggest screw-ups — personal, professional, and academic. For every failure, each student must describe what he or she learned from that experience. Just imagine the looks of surprise this assignment inspires in students who are so used to showcasing their successes. However, after they finish their résumé, they realize that viewing experiences through the lens of failure forces them to come to terms with their mistakes and to view them as a great source of data about what works and what does not.

On the most basic level, all learning comes from failure. Think of a baby learning to walk. He or she starts out crawling and falling before finally mastering a skill that we, as adults, take for granted. As a child gets older, each new feat, from catching a baseball to doing algebra, is learned the same way: by experimenting until he or she is finally successful. We don’t expect a child to do everything perfectly the first time, nor should we expect adults who take on complex tasks to get it all right the first time.

So, how do you prepare yourself for inevitable failures? People who spend their time on creative endeavors know that failure is a natural part of the creative process and are ready when it happens. Jeff Hawkins, founder of Palm, Handspring, and Numenta, gets worried when things go too smoothly, knowing that failure must be lurking around the corner. When he was running Handspring, everything was going swimmingly for the release of the original “Visor”, a new personal digital assistant. But Jeff kept warning his team that something would happen. And it did. Within the first few days of the release of their first product they shipped about 100,000 units. This was remarkable. But the entire billing and shipping system broke down. Some customers didn’t receive the products they paid for, and others received three or four times as many units as they ordered. This was a disaster, especially for a new business that was trying to build its reputation. So what did they do? The entire team, including Jeff, buckled down and called each and every customer. They asked each person what he or she had ordered, if they had received it, and whether they had been billed correctly. If anything wasn’t perfect, the company corrected it on the spot. The key point is that Jeff knew something would go wrong. He wasn’t sure what it would be, but was prepared to deal with anything that came their way. His experience has taught him that failure is inevitable, and that the key to success is not dodging every bullet but being able to recover quickly.

Most individuals’ paths are riddled with small and enormous failures. The key is being able to see these experiences as a source of valuable data, and to move on. For most successful people, the bottom is lined with rubber as opposed to concrete. When they hit bottom, they sink in for a bit and then bounce back, tapping into the energy of the impact to propel them into another opportunity. A great example is David Neeleman, the founder of JetBlue. David initially started an airline called Morris Air, which grew and prospered, and he sold it to Southwest Airlines for $130 million. He then became an employee of Southwest. After only five months David was fired. He was miserable working for them and, as he says, he was driving them crazy. As part of his contract he had a five-year non-compete agreement that prevented him from starting another airline. That seemed like a lifetime to wait. But after taking time to recover from this blow, David decided to spend that time planning for his next airline venture. He thought through all the details of the company, including the corporate values, the complete customer experience, the type of people they would hire, as well as the details of how they would train and compensate their employees. David says that getting fired and having to wait to start another airline was the best thing that ever happened to him. When the non-compete period was over, he was ready to hit the ground running. He was able to turn what seemed like a terrible situation into a period of extreme productivity and creativity.

In 2003, David Neeleman shared this story at the Entrepreneurial Thought Leaders Seminar.

Tackling new challenges in all domains requires a willingness to take risks, and a high chance of failure. However, risk taking is not binary. Each of us is comfortable taking some types of risks and finds other types quite uncomfortable. You might not even see the risks you are comfortable taking, discounting their riskiness, while amplifying the risk of things that make you more anxious. For example, you might love flying down a ski slope at lightning speed or jumping out of airplanes, and not view these activities as risky; if so, you’re blind to the fact that you’re taking on significant physical risk. Others, like me, who are not physical risk takers, would rather sip hot chocolate in the ski lodge or buckle themselves tightly into their airplane seats than strap on a pair of ski boots or a parachute. Alternatively, you might feel perfectly comfortable with social risks, such as giving a speech to a large crowd. These don’t seem risky at all to me. But others, who might be perfectly happy jumping out of a plane, would never give a toast at a party.

By my count, there are six primary types of risk: physical, social, emotional, financial, intellectual, and ethical. For example, I know that I’m comfortable taking social risks but not physical risks. In short, I will readily start a conversation with a stranger, but please don’t ask me to bungee jump off a bridge. I will also happily take intellectual risks that stretch my analytical abilities, but I’m not a big financial risk taker. On a trip to Las Vegas I would bring only a small amount of cash, to make sure I didn’t lose too much.

I ask my students to map their own risk profile. With only a little bit of reflection, each person knows which types of risks he or she is willing to take. They realize pretty quickly that risk taking isn’t uniform. It’s interesting to note that most entrepreneurs don’t see themselves as big risk takers. After analyzing the landscape, building a great team, and putting together a detailed plan, they feel as though they have squeezed as much risk out of the venture as they can. In fact, they spend most of their efforts working to reduce the risks for their business.

If you do take a risk and happen to fail, remember that failure is a natural part of the learning process. And, most important, if you aren’t failing sometimes, then you probably aren’t taking enough risks.


7 of 7


Five years ago, Stanford History of Science Professor Londa Schiebinger was in Madrid, where she was interviewed by Spanish newspaper reporters. When she returned home, she put the articles through Google Translate and was shocked to see that she was repeatedly referred to as “he.”

Oops — of all the people for this to happen to. Schiebinger has spent the last three decades exploring the intersection of gender and science, and her current work on Gendered Innovations in Science, Health & Medicine, Engineering, and Environment at Stanford University focuses on how to harness the creative power of "gender analysis" for discovery and innovation.

Google's algorithmic failure became fodder for a case study on gender biases in machine learning, with Schiebinger inviting two experts in natural-language processing to a workshop at Harvard University. After listening for about 20 minutes, the expert from Google said, "We can fix that!"

“Fixing it is great, but constantly retrofitting for women is not the best road forward,” Schiebinger states in her Gendered Innovations case study. “To avoid such problems in the future, it is crucial that computer scientists design with an awareness of gender from the very beginning.”

Such an issue may amount to no more than a minor bug for a business as big as Google. But a failure to consider diverse users in the design of a product at smaller firms like startups could seriously limit their financial future by overlooking or alienating potential markets and user needs — or even harming the business’s brand reputation.

Take Snapchat. In August, the image-messaging app introduced a filter that puts extremely slanted eyes, rounded cheeks and buckteeth on a face. In a blog post by Katie Zhu, a member of the product and engineering team at the publishing platform Medium, she said Snapchat called the filter “anime-inspired.”

“Anime characters are known for their angled faces, spiky and colorful hair, large eyes and vivid facial expressions,” Zhu wrote in her post “I’m deleting Snapchat, and you should too.” “This is quite literally yellowface, a derogatory and offensive caricature of Asians.”

“Fixing it is great, but constantly retrofitting for women is not the best road forward.”

– Stanford Professor Londa Schiebinger

This wasn't the only instance in which Snapchat has drawn criticism, both within the industry and from individual users, over filters that superimpose racially stereotypical traits onto photos of faces. Four months prior, Wired wrote about how Snapchat released a "Bob Marley filter" on April 20, the day marijuana lovers celebrate their herb of choice. The headline said it all: "Welp! Snapchat's 420 Filter Celebrates Bob Marley with Blackface."

The Venice-based startup issued a statement after the article appeared, explaining that the filter was created “in partnership with the Bob Marley Estate” to give fans a way to show their appreciation of the reggae legend. Snapchat has also spurred comments in mainstream press outlets such as Business Insider, and on sites like Medium and Quora, for a lack of transparency regarding its total number of employees and how many of them are women or minorities.

Yes, Snapchat's popularity seems to be growing by the day. But it's uncertain how much the firm can expand its user base to people over 25 and into the more lucrative demographic groups that advertisers desire. Moreover, systemic issues around diversity that get ignored early on just might grow into deal-breakers that make a hot, young startup less attractive to potential suitors in the years ahead.

That seems to be playing out at Twitter. On Oct. 27, the San Francisco public radio station KQED reported on the company's plans to lay off 300 people, and how the cuts may be tied to its unresponsiveness to harassment by "trolls." For KQED's California Report, journalist Queena Kim quoted a Bloomberg analyst who described how Twitter's user growth and advertising dollars are both flattening out, and how that may be a result of all the negative content.

Kim then spoke to a BuzzFeed reporter who said both Disney and Salesforce may have walked away as potential buyers in part because of Twitter’s seeming indifference toward offensive comments. Kim went on to explain how women and minorities who work at Twitter had brought the issue to the attention of senior management as far back as 2008, and how that leadership remains largely white and male.

The report ended with a comment by Kellie McElhaney, an adjunct faculty member at the Haas School of Business at the University of California, Berkeley: “This is an example of how lack of diversity is bad for business.”

(Since that report, Twitter announced several new measures aimed at curbing hate speech — although a New York Times article questions if features such as the ability to hide or report offensive posts will have any lasting impact.)

Data on Diversity … or Lack Thereof

While concerns about the underrepresentation of women and minorities in tech aren't new, the heightened awareness of the problem can be traced back to October 2013, when Tracy Chou, a young female engineer at Pinterest, blogged about the absence of data on gender diversity in the industry.

Other luminaries of the valley, such as Eric Ries and Vivek Wadhwa, had already pointed out that the local tech community mostly consisted of white men. But Chou did something clever: Alongside her blog post on Medium, she set up an online repository and encouraged tech workers throughout the valley to count the number of female engineers at their companies and share the figures.

“Systemic issues around diversity that get ignored early on just might grow into deal-breakers that make a hot, young startup less attractive to potential suitors in the years ahead.”

She started by posting Pinterest’s stats: 11 female engineers out of 89 total at the time. Soon after, dozens of other tech firms contributed theirs, including Dropbox, Reddit and Mozilla. A few months later, Google publicly reported its figures on ethnic and gender diversity, and then Apple, Facebook, Twitter and other tech giants followed suit.

“Once all that data was out there, this thing which had been an open secret in Silicon Valley for a long time became known to the rest of the world as well,” Chou said on the Stanford Innovation Lab podcast. “It became a really big topic of conversation when people realized that these companies [that] are producing technology and products that everyone is using were so not representative in their workforces of the people that they were trying to serve.”

Recently, the U.S. Equal Employment Opportunity Commission published a report on diversity in high-tech that summarized data on the race and sex of all employees at the top 75 tech firms in the valley (as ranked by the San Jose Mercury News in 2015). The data was pulled from government-mandated diversity reports, called “EEO-1” filings.

Of the nearly 210,000 employees across 230 work sites throughout the valley belonging to the top-ranked companies, here’s what the commission found: Seven out of 10 employees were men, while the percentages of black and Hispanic workers were 3 and 6 percent, respectively.

(Courtesy of U.S. Equal Employment Opportunity Commission)

“What is striking in this table is the degree of sex and race segregation,” the report states. “Women comprise just 30 percent of total employment and Asian Americans and Whites comprise 88 percent of all employment.”

Study after study has found that more diversity is better for business. Just to cite one report from 2015, the business-consulting giant McKinsey & Co. examined proprietary data sets for 366 public companies and found that more diverse workforces performed better financially.

Specifically, companies in the top quartile for racial and ethnic diversity were 35 percent more likely to have financial returns above their respective national industry medians. Meanwhile, firms in the top quartile for gender diversity were 15 percent more likely to top their industry medians in returns.

“Given the higher returns that diversity is expected to bring,” the report concludes, “we believe it is better to invest now, since winners will pull further ahead and laggards will fall further behind.”

Earlier Interventions Needed

The glitch in Google Translate that converted references to Londa Schiebinger to “he” was based on the fact that phrases such as “he said” are more commonly found online than “she said.” Schiebinger’s case study on this fits into a category she calls “bias in big data,” and she has since discovered newer examples of unintentional bias in algorithms.

She notes, for instance, that Amazon’s same-day delivery service was unavailable for zip codes in predominantly black neighborhoods at one point. She also notes the now well-known example of Google’s photo app, which applies automatic labels to pictures in digital photo albums. Early versions of the app inadvertently identified African-Americans as gorillas. Google apologized, saying it was unintentional.

Specifically in the area of product design, Schiebinger discovered examples of likely gender bias in software at Pinterest and Apple:

  • At Pinterest, she saw how the failure to consider men as users may have taken a financial toll. Between 2011 and 2015, Pinterest’s valuation rose from $200 million to $11 billion. Yet, only 29 percent of Pinterest users were men in 2014. “It is not only women who are left out, but also men. Pinterest is currently seeking to revamp its search algorithms to reach out to men,” Schiebinger said. “Products that incorporate the smartest aspects of gender and meet the needs of diverse user groups can open new markets and enhance global competitiveness.”
  • When Apple released its HealthKit app in 2014, it boasted the ability to record blood pressure, daily steps and calories, respiratory rate and even blood-alcohol level. But it left out women’s menstrual cycles. Apple updated it a year later, but Schiebinger questioned the cost in terms of profit, poor publicity and team morale.

Tracy Chou now works on increasing diversity at tech companies through an organization called Project Include, which she co-founded with other influential women in the industry. They describe the project as “an open community working toward providing meaningful diversity and inclusion solutions.” Beyond tactics like anonymizing job-applicant resumés, Chou says Project Include offers comprehensive recommendations and is taking steps to put them into practice.

One is called Startup Include, where companies that participate commit to metrics that they will share with Project Include after three and six months to measure progress. Much like the tech industry itself, the project takes an open source and data-driven approach, with the goal of aggregating metrics across a cohort of firms and developing benchmarks.

“It’s much easier to change the course early on and set the right culture, than trying to steer a really massive ship later.”

– Tracy Chou, Project Include

“We’re focused specifically on tech startups, where we think there’s just so much opportunity to get things right from the beginning,” said Chou, who earned degrees in electrical engineering and computer science from Stanford. “It’s much easier to change the course early on and set the right culture, than trying to steer a really massive ship later.”

Among the startups working with Project Include are Asana, Clef, Managed by Q, Patreon, Periscope Data, PreK12Plaza, Puppet, Truss and Upserve. But just as success doesn’t happen overnight in the startup world, Chou says the effort to increase diversity in tech will require conviction and patience commensurate with the problem at hand.

“There isn’t a quick fix for diversity and inclusion,” she says. “It has to be prioritized on an ongoing basis and it takes a lot of hard work. But it’s worth the effort.”