Your cell phone has smokestack emissions. So too does your electric vehicle. The simple reason for this is that, here in the US, only 3.6% of energy supply in 2023 came from renewable sources such as wind, solar, hydroelectric and geothermal. Fossil fuels remain our predominant way of generating electricity.
The picture is equally bleak worldwide, with fossil fuels meeting the bulk of our energy demands and turning every electrically powered device, from cell phone to electric vehicle to data center server, into an exhaust-spewing challenge to Mother Nature.
In our increasingly digitized world, the demands of computation, and of artificial intelligence in particular, are creating a profound new climate stress. As the use of AI spreads into all aspects of human life and business, its appetite for power and water keeps growing.
Just how much power and water are we talking about? AI-driven data center power consumption is slated to soon reach 8.4 TWh (Chien 2023), with an associated carbon cost of “3.25 gigatons of CO2, the equivalent of 5 billion U.S. cross-country flights”. Yale’s School of the Environment reports that “Data centers’ electricity consumption in 2026 is projected to reach 1,000 terawatt-hours, roughly Japan’s total consumption” (Berreby 2024), while researchers at UC Riverside estimate that global AI demand for water in 2027 could reach 4.2-6.6 billion cubic meters, roughly equivalent to the consumption of “half of the United Kingdom” (Li et al. 2023). Ouch.
To address the accelerating energy consumption of AI, the ideal solution would be a transition to 100% renewable energy, but that goal remains distant. A more feasible approach is a syncretic one, combining specialized AI hardware and software, innovative data center designs, and comprehensive AI policies, including regulation. This discussion outlines current strategies for reducing AI’s energy demands; many of the solutions derive from software technology, putting them well within reach of AI practitioners.
To address the heat generated by AI computations in data centers, which demands significant cooling energy, several cooling methods can be employed. Free air cooling, which uses outdoor air to cool indoor environments, is highly efficient and uses minimal water, but only works in cooler climates. Evaporative (adiabatic) cooling also provides efficient cooling with low power and water usage. Some recent designs utilize submersion cooling, in which hardware is immersed in a dielectric fluid that transfers heat without conducting electricity, eliminating the need for traditional air conditioning. Conversely, mechanical air conditioning is the least efficient method, owing to its high power and water costs.
While nuclear power via small modular reactors (SMRs) is sometimes proposed as a cleaner energy solution for data centers, SMR deployment will take years, and the technology faces safety, waste, and economic concerns similar to those of larger reactors. Focusing instead on renewable energy sources and storage solutions, such as solar power, may offer more immediate benefits for meeting data center energy needs.
Since the mid-19th century, internal combustion engines have evolved into highly specialized devices for various applications, from lawnmowers to jumbo jets. Today, AI hardware is undergoing a similar evolution, with specialized, high-performance processors replacing less efficient general-purpose CPUs. Google’s introduction of the tensor processing unit (TPU) in 2015 marked a bellwether advance in custom-designed AI hardware. NVIDIA’s GPUs, which excel in parallel processing for deep learning and large-scale data operations, have driven substantial growth in both sales and stock valuation. As AI demand increases, we can anticipate a proliferation of vendors offering increasingly specialized processors that deliver superior computation speeds at lower energy costs.
Recent examples of this inevitable wave include Groq, a company building a language processing unit (LPU). Groq claims its custom hardware runs generative AI models similar to those from OpenAI “at 10x the speed and one-tenth the energy”. Also on the list is Cerebras Systems’ wafer-scale chip, which “runs 20 times faster than NVIDIA GPUs”. Then there’s Etched’s Sohu ASIC, which burns the transformer architecture directly into the silicon and so “can run AI models an order of magnitude faster and cheaper than GPUs”; SiMa.ai, which claims “10x performance”; and of course Amazon’s Trainium, Graviton and Inferentia processors.
The future of chip innovation may lie in biomimetic design, inspired by nature’s energy-efficient intelligence. Technologies like those developed by FinalSpark are expected to contribute to this trend. However, access to specialized processors is likely to remain a competitive landscape, with smaller companies facing particular challenges.
Data center cooling equipment includes chillers, pumps and cooling towers. Can this equipment be run in an optimal fashion in order to maximize cooling while minimizing energy? In 2016, engineers at Google did just that, implementing a neural network (Evans & Gao 2016) featuring five hidden layers with 50 nodes per layer, and 19 discrete input variables, including data from the cooling equipment and weather conditions outdoors. Trained on 2 years’ worth of operating data, this neural network succeeded in reducing the energy used for cooling Google’s data centers by a whopping 40%. (Despite this, Google’s greenhouse gas emissions have skyrocketed 48% in the past 5 years thanks to the relentless demands of AI.)
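The architecture described above is small by modern standards and easy to sketch. The following toy model, in plain Python, mirrors only the shape reported by Evans & Gao (19 inputs, five hidden layers of 50 nodes, one predicted efficiency value); the random weights and sensor readings are placeholders, not Google’s actual model.

```python
import math
import random

def make_layer(n_in, n_out, rng):
    """Randomly initialized dense layer: (weights, biases)."""
    return ([[rng.uniform(-0.1, 0.1) for _ in range(n_in)] for _ in range(n_out)],
            [0.0] * n_out)

def forward(x, layers):
    """Feed-forward pass with tanh activations on the hidden layers."""
    for i, (W, b) in enumerate(layers):
        x = [sum(w * xi for w, xi in zip(row, x)) + bi for row, bi in zip(W, b)]
        if i < len(layers) - 1:          # no activation on the output layer
            x = [math.tanh(v) for v in x]
    return x

rng = random.Random(0)
# 19 inputs (cooling equipment + weather readings), 5 hidden layers of
# 50 nodes each, and one output: predicted PUE (power usage effectiveness)
sizes = [19] + [50] * 5 + [1]
layers = [make_layer(sizes[i], sizes[i + 1], rng) for i in range(len(sizes) - 1)]

readings = [rng.random() for _ in range(19)]   # hypothetical sensor snapshot
predicted_pue = forward(readings, layers)[0]
```

In production, a model like this is trained on historical plant data and then queried to choose chiller and cooling-tower setpoints that minimize the predicted PUE.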
In addition to using software to orchestrate the cooling machinery in a data center, the same can be done with the AI applications running there. By optimizing what gets run (if at all), where it’s run and when it’s run, data centers can achieve substantial energy savings on their AI workloads. (As a simple example, imagine moving an AI workload from the afternoon to the early morning hours and saving 10% of the energy right off the bat.)
Orchestration of AI falls into two categories: orchestrating the AI training process, and orchestrating AI at inference (runtime). There are a number of different approaches being taken in the orchestration of AI training. Two promising tacks are power-capping (McDonald et al. 2022) and training performance estimation (TPE; Frey et al. 2022). In the former, standard NVIDIA utilities were used to cap the power budget available for training a BERT language model. Though the power cap led to a longer time-to-train, the resulting energy savings were material, with “a 150W bound on power utilization [leading] to an average 13.7% decrease in energy usage and 6.8% increase in training time.”
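Because energy is just power multiplied by time, the tradeoff behind power-capping can be sketched in a few lines. The wattages and slowdown below are hypothetical, chosen only to illustrate how a cap can lengthen a run and still come out ahead on energy:

```python
def capped_energy_delta(baseline_watts, baseline_hours, capped_watts, slowdown):
    """Relative energy change when training under a power cap.

    Energy = average power x wall-clock time; a cap lowers power but
    stretches the run by `slowdown` (e.g. 1.07 for a 7% longer run).
    """
    baseline = baseline_watts * baseline_hours
    capped = capped_watts * baseline_hours * slowdown
    return (capped - baseline) / baseline

# Hypothetical scenario: a 250W GPU capped to 200W, with a 7% slowdown
delta = capped_energy_delta(250, 24, 200, 1.07)
print(f"{delta:+.1%}")   # prints -14.4% (a net energy saving under the cap)
```

The cap pays off whenever the fractional drop in power exceeds the fractional increase in runtime, which is exactly the pattern McDonald et al. report.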
TPE is based on the principle of early stopping during AI training. Instead of training every model and hyperparameter configuration to full convergence over 100 epochs, which incurs significant energy costs, networks might be trained for only 10-20 epochs. At this stage, a snapshot is taken to compare performance with other models, allowing for the elimination of non-optimal configurations and retention of the most promising ones. “By predicting the final, converged model performance from only a few initial epochs of training, early stopping [of slow-converging models] saves energy without a significant drop in performance.” The authors note that “In this way, 80-90% energy savings are achieved simply by performing HPO [hyperparameter optimization] without training to convergence.”
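The idea can be sketched with stub “training curves” standing in for real models; the convergence ceilings and noise here are invented for illustration, not drawn from the paper.

```python
import random

def validation_score(config, epoch, rng):
    """Stub training curve: each config converges toward its own ceiling."""
    return config["ceiling"] * (1 - 0.9 ** epoch) + rng.uniform(-0.01, 0.01)

def early_stopping_hpo(configs, probe_epochs=10, full_epochs=100, keep=2):
    """Train every config for a few epochs, keep the most promising ones,
    and spend the full epoch budget only on the survivors."""
    rng = random.Random(0)
    ranked = sorted(configs,
                    key=lambda c: validation_score(c, probe_epochs, rng),
                    reverse=True)
    survivors = ranked[:keep]
    # Energy is roughly proportional to total epochs trained
    naive_epochs = len(configs) * full_epochs
    spent_epochs = len(configs) * probe_epochs + keep * (full_epochs - probe_epochs)
    return survivors, 1 - spent_epochs / naive_epochs

configs = [{"ceiling": c} for c in (0.70, 0.85, 0.92, 0.78, 0.88)]
best, savings = early_stopping_hpo(configs)   # savings: 54% of the epoch budget
```

Even in this toy setup, probing five configurations for 10 epochs and fully training only two of them cuts the total epoch count (a proxy for energy) roughly in half.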
Approximately 80% of AI’s workload involves inference, making its optimization crucial for energy reduction. An illustrative example is CLOVER (Li et al., 2023), which achieves energy savings through two key optimizations: GPU resource partitioning and mixed-quality models. GPU partitioning enhances efficiency by orchestrating resource utilization at the GPU level. Mixed-quality models refer to the availability of multiple model variants that differ in accuracy and resource needs. “Creating a mixture of model variants (i.e., a mixture of low- and high-quality models) provides an opportunity for significant reduction in the carbon footprint” by allowing the best model variant to be orchestrated at runtime, trading off accuracy against carbon savings. CLOVER’s mixed-quality inference services combined with GPU partitioning have proven highly effective, yielding “over 75% carbon emission savings across all applications with minimal accuracy degradation (2-4%)”.
Orchestration from a portfolio of mixed-quality models holds tremendous promise. Imagine intelligently trading off energy versus accuracy at runtime based on real-time requirements. As a further example, it’s been shown that “the answers generated by [the smaller] GPT-Neo 1.3B have similar quality of answers generated by [the larger] GPT-J 6B but GPT-Neo 1.3B only consumes 27% of the energy” and 20% as much disk (Everman et al. 2023). Yet another impactful approach to orchestration, cascading mixed-quality LLMs, was demonstrated in FrugalGPT (Chen et al. 2023). A key technique in FrugalGPT is its cascaded architecture, which avoids querying the high-resource-demand GPT-4 as long as the lower-resource-demand GPT-J or J1-L can produce high-quality answers. “FrugalGPT can match the performance of the best individual LLM (e.g. GPT-4) with up to 98% cost reduction or improve the accuracy over GPT-4 by 4% with the same cost”.
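A FrugalGPT-style cascade can be sketched with stub models. The confidence scores and the length-based heuristic below are invented placeholders; a real system would use a learned quality scorer to decide when to escalate.

```python
def cheap_model(prompt):
    """Stub for a small, low-energy LLM (hypothetical): confident on
    short factual prompts, unsure on long, complex ones."""
    confidence = 0.9 if len(prompt) < 40 else 0.6
    return f"short answer to: {prompt}", confidence

def large_model(prompt):
    """Stub for a large, high-energy LLM (hypothetical)."""
    return f"detailed answer to: {prompt}", 0.97

def cascade(prompt, threshold=0.8):
    """Escalate to the large model only when the cheap model's
    quality score falls below the acceptance threshold."""
    answer, confidence = cheap_model(prompt)
    if confidence >= threshold:
        return answer, "cheap"
    answer, _ = large_model(prompt)
    return answer, "large"

_, tier_easy = cascade("Capital of France?")          # cheap model suffices
_, tier_hard = cascade("Summarize the history of "    # long query: escalate
                       "thermodynamics from Carnot through Boltzmann.")
```

The energy win comes from the fact that, in practice, most queries never reach the expensive tier.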
Also notable in orchestrating machine learning inference is Kairos (Li et al. 2023). Kairos is a runtime framework that enhances machine learning inference by optimizing query throughput within quality and cost constraints. It achieves this by pooling diverse compute resources and dynamically allocating queries across that fabric for maximum efficiency. By leveraging similarities in top configurations, Kairos selects the most effective one without online evaluation. This approach can double the throughput of homogeneous solutions and outperform state-of-the-art methods by up to 70%.
From an innovation standpoint, a market opportunity exists for providing optimized AI orchestration to the data center. Evidence of the importance of AI orchestration may be seen in NVIDIA’s recent acquisition of Run:ai, a supplier of workload management and orchestration software.
Reducing the complexity of mathematical operations is a key strategy for decreasing AI’s computational load and energy consumption. AI typically relies on 32-bit precision numbers within multi-dimensional matrices and dense neural networks. Transformer-based models like ChatGPT also use tokens in their processing. By minimizing the size and complexity of numbers, matrices, networks and tokens, significant computational savings can be achieved with minimal loss of accuracy.
Quantization, the process of reducing numerical precision, is central to this approach. It involves representing neural network parameters with lower-precision data types, such as 8-bit integers instead of 32-bit floating point numbers. This reduces memory usage and computational costs, particularly for operations like matrix multiplications (MatMul). Quantization can be applied in two ways: post-training quantization (PTQ), which rounds existing networks to lower precision, and quantization-aware training (QAT), which trains networks directly using low-precision numbers.
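A minimal sketch of symmetric post-training quantization to 8-bit integers follows; real frameworks add per-channel scales, zero-points and calibration data, all omitted here.

```python
def quantize_int8(weights):
    """Symmetric PTQ: map floats onto integers in [-127, 127]."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 representation."""
    return [qi * scale for qi in q]

weights = [0.42, -1.27, 0.003, 0.9, -0.55]
q, scale = quantize_int8(weights)
recovered = dequantize(q, scale)
max_err = max(abs(a - b) for a, b in zip(weights, recovered))
```

Each weight now fits in one byte instead of four, and the rounding error is bounded by half the scale step, which is why accuracy loss is typically small.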
Recent work with OneBit (Xu et al. 2024), BitNet (QAT: Wang et al. 2023) and BiLLM (PTQ: Huang et al. 2024) has shown the efficacy – delivering accuracy while reducing memory and energy footprint – of reduced bit-width approaches. BiLLM, for example, approximated most numbers with a single bit, but utilized 2 bits for salient weights (hence average bit-widths > 1). With overall bit-widths of around 1.1, BiLLM was able to deliver consistently low perplexity scores despite its lower memory and energy costs.
Reinforcing the potential for 1-bit LLM variants is BitNet b1.58 (Ma et al. 2024), “where every parameter is ternary, taking on values of {-1, 0, 1}.” The additional value of 0 was injected into the original 1-bit BitNet, resulting in log2(3) ≈ 1.58 bits per parameter in the binary system. BitNet b1.58 “requires almost no multiplication operations for matrix multiplication and can be highly optimized. Additionally, it has the same energy consumption as the original 1-bit BitNet and is much more efficient in terms of memory consumption, throughput and latency compared to FP16 LLM baselines”. BitNet b1.58 was found to “match full precision LLaMA LLM at 3B model size in terms of perplexity, while being 2.71 times faster and using 3.55 times less GPU memory. In particular, BitNet b1.58 with a 3.9B model size is 2.4 times faster, consumes 3.32 times less memory, but performs significantly better than LLaMA LLM 3B.”
Ternary neural networks (Alemdar et al. 2017) that constrain weights and activations to {−1, 0, 1} have proven to be very efficient (Liu et al. 2023, Zhu et al. 2024) because of their reduced use of memory and ability to eliminate expensive MatMul operations altogether, requiring simple addition and subtraction only. Low-bit LLMs further lend themselves to implementation in (high-performance) hardware, with native representation of each parameter as -1, 0, or 1, and simple addition or subtraction of values to avoid multiplication. Indeed, the Zhu work achieved “brain-like efficiency”, processing billion-parameter scale models at 13W of energy (lightbulb-level!), all via a custom FPGA built to exploit the lightweight mathematical operations.
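The arithmetic saving is easy to see in a sketch: with ternary weights, a matrix-vector product needs no multiplications at all, only additions and subtractions.

```python
def ternary_matvec(W, x):
    """Matrix-vector product for a weight matrix with entries in
    {-1, 0, 1}: every multiply collapses into an add, a subtract,
    or a skip."""
    out = []
    for row in W:
        acc = 0.0
        for w, xi in zip(row, x):
            if w == 1:
                acc += xi
            elif w == -1:
                acc -= xi
            # w == 0 contributes nothing
        out.append(acc)
    return out

W = [[1, -1, 0],
     [0, 1, 1]]
x = [0.5, 2.0, -1.0]
y = ternary_matvec(W, x)   # → [-1.5, 1.0]
```

On custom hardware this add/subtract/skip pattern is precisely what eliminates the costly multiplier units, which is how the Zhu et al. FPGA reached its lightbulb-level power draw.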
Low-Rank Adaptation (LoRA; Hu et al. 2021) is a technique designed to fine-tune large pre-trained models efficiently by focusing on a low-rank (lower-dimensionality) approximation of the model’s weight matrices. Instead of updating all the model parameters, LoRA introduces additional low-rank matrices that capture the essential adaptations while keeping the original weights mostly unchanged. This approach reduces computational costs and storage requirements, making it feasible to adapt large models to specific tasks with limited resources. “LoRA can reduce the number of trainable parameters by 10,000 times and the GPU memory requirement by 3 times.” In addition, LoRA provides better model quality “despite having fewer trainable parameters, a higher training throughput, and, unlike adapters, no additional inference latency”.
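The parameter saving is simple arithmetic: fully fine-tuning a d × k weight matrix trains d·k parameters, while a rank-r LoRA update W + A·B (A is d × r, B is r × k) trains only r·(d + k). A sketch follows; the 4096 × 4096 projection and rank 8 are illustrative choices, not figures from the paper.

```python
def lora_param_counts(d, k, r):
    """Trainable-parameter counts: full fine-tuning of a d x k matrix
    versus a rank-r LoRA update (A is d x r, B is r x k)."""
    full = d * k
    lora = r * (d + k)
    return full, lora

# Hypothetical transformer projection matrix, LoRA rank 8
full, lora = lora_param_counts(4096, 4096, 8)
reduction = full / lora   # → 256.0, i.e. 256x fewer trainable parameters
```

Because only A and B receive gradients, optimizer state and gradient memory shrink by the same factor, which is where LoRA’s GPU-memory savings come from.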
Neural network pruning achieves computational efficiency by reducing the size of a trained neural network: unnecessary connections (weights) or even entire neurons are removed, making the network smaller, faster, and more efficient without significantly sacrificing performance. The concept was first introduced in the paper “Optimal Brain Damage” (Le Cun et al. 1989) and has been much advanced (Han et al. 2015) in the recent past. The efficiencies reported in Han et al.’s work were substantial: “On the ImageNet dataset, our method reduced the number of parameters of AlexNet by a factor of 9×, from 61 million to 6.7 million, without incurring accuracy loss. Similar experiments with VGG-16 found that the total number of parameters can be reduced by 13×, from 138 million to 10.3 million, again with no loss of accuracy.”
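In its simplest form, magnitude pruning just zeroes out the smallest weights. A toy sketch (real pipelines, per Han et al., iterate pruning with retraining to recover accuracy):

```python
def magnitude_prune(weights, sparsity=0.5):
    """Zero out the smallest-magnitude fraction of weights.
    (Ties at the threshold may prune slightly more than requested.)"""
    n_prune = int(len(weights) * sparsity)
    threshold = sorted(abs(w) for w in weights)[n_prune - 1] if n_prune else 0.0
    return [0.0 if abs(w) <= threshold else w for w in weights]

weights = [0.9, -0.05, 0.4, 0.01, -0.7, 0.002]
pruned = magnitude_prune(weights, sparsity=0.5)
# → [0.9, 0.0, 0.4, 0.0, -0.7, 0.0]
```

The zeroed weights can then be stored and computed sparsely, which is what turns the parameter reduction into real memory and energy savings.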
Knowledge distillation is a technique for reducing neural network size by transferring generalized knowledge from a larger “teacher” model to a smaller “student” model, perhaps similar to the Hoskins Effect in virology. This process involves distilling the teacher model’s probability distributions into the student model, which results in a more compact network that maintains high performance at lower resource costs. Knowledge distillation has proven effective in tasks such as image identification (Beyer et al., 2021), pedestrian detection (Xu et al., 2024), and small language models. For instance, NVIDIA and Mistral AI’s Mistral-NeMo-Minitron 8B achieved superior accuracy compared to other models by combining neural network pruning and knowledge distillation, despite using orders of magnitude fewer tokens.
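The heart of distillation is a loss that pushes the student’s temperature-softened output distribution toward the teacher’s. A minimal sketch of that soft-target term (Hinton-style recipes also add a hard-label term and a T² scaling, omitted here; the logits are invented):

```python
import math

def softmax(logits, temperature=1.0):
    """Convert logits to probabilities, softened by the temperature."""
    exps = [math.exp(z / temperature) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=4.0):
    """Cross-entropy between the temperature-softened teacher and student
    distributions; a higher temperature exposes the teacher's 'dark
    knowledge' about how similar the non-target classes are."""
    p = softmax(teacher_logits, temperature)   # soft targets
    q = softmax(student_logits, temperature)
    return -sum(pi * math.log(qi) for pi, qi in zip(p, q))

teacher = [4.0, 1.0, 0.2]   # hypothetical logits from the large model
student = [3.0, 1.5, 0.1]   # hypothetical logits from the small model
loss = distillation_loss(teacher, student)
```

Minimizing this loss over a training set gradually transfers the teacher’s learned class relationships into the much smaller student.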
Small language models (SLMs) also offer a method to reduce computational load and energy consumption. While SLMs are often discussed in the context of on-device applications, such as those from Microsoft and Apple, they also decrease computational and energy demands in data center environments. SLMs are characterized by smaller datasets, fewer parameters, and simpler architectures. These models are designed for low-resource settings, requiring less energy for both inference and training. Research (Schick & Schütze 2020) indicates that SLMs can achieve performance comparable to GPT-3 while having orders of magnitude fewer parameters.
Another notable optimization approach is SPROUT (Li et al., 2024), which reduces transformer math and carbon impact by decreasing the number of tokens used in language generation. SPROUT’s key insight is that the carbon footprint of LLM inference depends on both model size and token count. It employs a “generation directives” (similar to compiler directives) mechanism to adjust autoregressive inference iterations, achieving over 40% carbon savings without compromising output quality.
One last method for reducing AI’s computational load that brings promise is neuro-symbolic AI (NSAI; Susskind et al. 2021). NSAI integrates neural networks with symbolic reasoning, combining the strengths of both: neural networks excel at pattern recognition from large data sets, while symbolic reasoning facilitates logic-based inference. This integration aims to overcome the energy demands of neural networks and the rigidity of symbolic systems, creating more robust and adaptable AI. Research indicates that NSAI can achieve high accuracy with as little as 10% of the training data, potentially representing a pathway to sustainable AI.
It should be noted that pushing AI computation to the edge, e.g., onto your mobile device, does reduce data center load, including the roughly 40% of data center energy currently spent on cooling. Energy and carbon impacts are, however, still incurred from charging your mobile device.
The battle for AI chip supremacy is being fought as much in the software frameworks built atop the silicon as in the silicon itself. These frameworks increasingly provide native support for the sorts of math optimizations described in this article. Pruning, for example, is one of the core optimization techniques built into the TensorFlow Model Optimization Toolkit (MOT).
NVIDIA’s competitor AMD has been aggressively accreting software framework technology via the acquisition of companies such as Mipsology, Silo.ai and Nod.ai, all in aid of countering the significant advantages that NVIDIA’s extensive software technology, including its CUDA parallel programming platform and NIM (NVIDIA Inference Microservices) framework, brings to its hardware.
In recent work NVIDIA published together with Hugging Face, the full impact of turning NIM on was seen in a 3x improvement in tokens/second performance.
Modern software applications, from lightweight mobile apps like Instagram to complex systems such as automobile operating systems, can encompass anywhere from 1 million to 100 million lines of code. This underscores the necessity of integrating energy-awareness into software design from the outset, treating it as an architectural attribute alongside scalability and latency. Neglecting this early integration results in a challenging, if not insuperable, retrofitting process for energy efficiency at some later point, in what will by then have become a large, legacy application.
Key architectural strategies include simplifying code through pruning layers and nodes, reducing instruction counts, training with minimal data as in SLMs, and employing techniques like RAG and p-tuning to minimize training overhead. Additionally, incorporating drift tolerance, zero-shot and transfer learning, optimizing job schedules, and carefully selecting cloud computing resources are essential practices.
Also of salient importance is measuring the climate impact of AI models. Per the old saw, you can’t improve what you can’t measure. Tools for monitoring AI’s energy footprint are plentiful, readily supplied by cloud vendors such as Amazon, Google, Microsoft and NVIDIA. There are also multiple third-party solutions available from the likes of Carbontracker, Cloud Carbon, PowerAPI, CodeCarbon, ML Commons and ML CO2 Impact.
Watt’s Up? Policies!
“Into the corner, broom! broom! Be gone!” from Goethe’s The Sorcerer’s Apprentice
AI is becoming ever more ubiquitous, leading to ever larger demands on our straining power grid. Despite all of the measures that can be taken to dampen the power and environmental impacts of AI, such as the methods described here, AI technology is fighting a losing battle with itself. The International Energy Agency recently found that “The combination of rapidly growing size of models and computing demand are likely to outpace strong energy efficiency improvements, resulting in a net growth in total AI-related energy use in the coming years.” This conclusion mirrored one that was reached by researchers from MIT (Thompson et al. 2022), which stated “that progress across a wide variety of applications is strongly reliant on increases in computing power. Extrapolating forward this reliance reveals that progress along current lines is rapidly becoming economically, technically, and environmentally unsustainable”.
What can we do? The answer lies in governing our actions through policies at three levels: as individuals, as corporations, and as polities. Reducing AI energy demand stands as both a moral imperative and a sound business practice.
To mitigate AI’s climate impact, regulatory measures are inevitable. There is precedent: following the 1973 oil crisis, 1975’s Corporate Average Fuel Economy (CAFE) standards mandated fuel efficiency for U.S. automobiles and demonstrated the effectiveness of energy regulations. Complemented by state-level gasoline taxes, these standards have continued to drive consumers towards more environmentally friendly, fuel-efficient vehicles.
In the face of our burgeoning climate crisis we can expect similar regulations addressing carbon impacts globally. Recent examples include Denmark’s carbon emissions tax on livestock and California’s Senate Bill 253 (SB 253). The urgency of climate change necessitates robust legislative responses worldwide.
Historically, the 1973 oil crisis favored companies that had already adopted energy-efficient technologies, notably the Japanese auto industry, while the U.S. auto industry, reliant on less efficient vehicles, struggled to recover its industry dominance (Candelo 2019, Kurihara 1984). This underscores the benefits of early adoption of energy efficiency.
California’s SB 253, which requires corporations with revenues over $1 billion to disclose greenhouse gas emissions, is a positive step but could be improved. A broader reporting threshold, similar to the $25 million revenue threshold of the 2018 California Consumer Privacy Act, would be more effective. Greenhouse gases are pollutants, and given the gravity of the climate crisis, we must consider the impact of AI, including from companies with less than $1 billion in revenue.
Smaller technology companies might argue that compliance with SB 253’s reporting requirements is burdensome. However, integrating energy efficiency from the start — like the early adoption seen in Japanese automobiles prior to 1973’s oil crunch — offers competitive advantages. As climate constraints increase, energy-efficient products will be more viable, making early compliance beneficial.
Regulation akin to CAFE standards for AI is likely forthcoming in every jurisdiction worldwide. Start-ups that adopt energy-efficient practices early will be better prepared for future regulations and market demands. Additionally, energy-efficient AI products are more cost-effective to operate, enhancing their appeal to business customers and supporting long-term growth.
Corporate AI policies should prioritize employee education on climate issues to build a knowledgeable workforce capable of advancing sustainability. Product design must incorporate environmental considerations, and operational expenditure (e.g., selecting a cloud service provider) should focus on minimizing ecological impact. Accurate measurement and reporting of environmental metrics are essential for transparency and accountability. Companies should also anticipate future regulatory requirements related to climate impacts and design products to comply proactively. Finally, corporate policy should steer clear of measures that yield no tangible carbon reduction. Adopting these practices will support environmental sustainability and enhance positioning within an evolving regulatory framework.
For AI policies at the individual level, we should all remain cognizant of the environmental impacts associated with artificial intelligence. It’s important to use AI technologies judiciously, recognizing both their potential benefits and their contributions to climate change. Furthermore, sharing this awareness with others can help amplify the understanding of AI’s climate implications, fostering a broader community of informed and responsible technology users. By integrating these personal policies, individuals can contribute to a more sustainable approach to AI utilization.
In Goethe’s poem “The Sorcerer’s Apprentice”, the apprentice’s reckless use of magical powers without sufficient understanding or control leads to chaos and disaster, as “autonomous AI” brooms flood the house with water beyond the apprentice’s ability to manage. This allegory resonates with the contemporary challenges of AI automation. Just as the apprentice’s unchecked use of magic brings unforeseen consequences, so too can the unregulated deployment of AI technologies result in unintended and harmful climate outcomes. Goethe’s poem underscores the necessity of constraining and governing powerful tools to prevent them from spiraling out of control. Effective oversight and regulation are crucial in ensuring that AI, like the sorcerer’s magic, is harnessed responsibly and ethically, preventing the potential for technological advances to exacerbate existing issues or create new ones.
The post Reducing AI’s Climate Impact: Everything You Always Wanted to Know but Were Afraid to Ask appeared first on UC Berkeley Sutardja Center.
Kevin Kung, co-founder of Takachar, began his climate innovation journey at UC Berkeley’s Blum Center through the 2015 Big Ideas Contest. Now hosting California Climate Action Fellows, Takachar helps transform agricultural waste into bioproducts, emphasizing a human-centered approach that addresses environmental challenges and supports underserved communities.
The post How a Berkeley alum and a Climate Action Fellowship partner fight for a sustainable future through human-centered engineering appeared first on Blum Center.
When you visit the public university that has created the highest number of venture-funded startups in the world, you expect that it did not happen by chance. And yet, uncovering the hows and whys scientifically is as frustrating as it is ambitious; the answer is elusive and somehow feels like magic, despite the tangible evidence. But these aren’t just academic questions – they cut to the heart of how Berkeley consistently turns ideas into action, and how entrepreneurial universities worldwide could “make it happen” as well.
On the flyer, the Sutardja Center for Entrepreneurship and Technology (SCET) appears rather confident in its portfolio of entrepreneurship education programs: “We will make sure you will learn the mindset and behaviors that drive success”. Of course, that’s a catchphrase, but I can’t help but appreciate, as an academic, how specific an expectation they set. The reality is that we can nurture entrepreneurial capabilities, but the creation of solid startups is a different and imperfect game.
During the semester I spent at SCET, I was privileged to witness how they orchestrated their entrepreneurship education programs to impact the students. But throughout the entire period, I always felt as though I were lingering around the White Rabbit’s hole, sniffing at the outline and occasionally sticking my head in to look deeper. Many students described their separate entrepreneurial journeys as “being thrown into the rapids of a whitewater and emerging with concrete opportunities”. Where does this success take shape? The Berkeley brand and Silicon Valley clearly play an important role, but in my partial narration I can already spoil that the most shared and culturally fitting idea I gathered was that Berkeley makes it happen by “allowing for serendipity” – I will now take three different perspectives to unravel such magic.
The main focus of my research at SCET converged on the role of their educators in delivering effective entrepreneurship programs. Academically, the pedagogical landscape is divided into a more traditional education “about and for” entrepreneurship and a more recent education “through” entrepreneurship. The empirical observation is that entrepreneurship programs worldwide are increasingly blurring these boundaries, emphasizing the more general idea of deploying experiential and transformative learning journeys. As such, entrepreneurship education programs range from innovating on existing products through design thinking processes (challenge-based learning, CBL) to gamifying the creation of actual ventures (venture creation programs, VCPs); but the most prominent feature is the engagement of external actors like users, mentors, industry fellows, venture capitalists, etc., who challenge students to validate their learnings.
SCET embodies such experiential entrepreneurship education in each course and summarizes it as the “Berkeley Method of Entrepreneurship” (BMoE), highlighting the focus on the entrepreneurial mindset. The most straightforward example is the BMoE bootcamp that happens right before the beginning of the semester, where students and professionals work together for 5 days, learning the fundamentals of venture creation by doing. The portfolio of programs then varies, ranging from the horizontal Collider Labs, which offer different tracks for solving innovation challenges, to more vertical courses like Technology Entrepreneurship or Startup Catalyst, which concretely support venture development, and other formats in between, like the Newton’s Lecture Series, which allows students to interact with distinguished innovators.
Modern entrepreneurship education is difficult, and it requires structures and frameworks of reference, but providing an experiential learning journey that includes all those external actors completely breaks the classroom’s fourth wall. Here, educators cannot be just teachers but must take on many roles: facilitators, coordinators, project managers, boundary spanners. Guess which other job has the same requirements? Well, one of SCET’s secrets is that most of its entrepreneurship educators are current entrepreneurs, and this works brilliantly, not least because students appear to learn better from those who have already faced the hurdles of developing a startup. These people are passionate, competent, empathetic, independent, and able to manage the daily tensions of engaging multiple actors. “It is overwhelming unless you have already been a CEO for five startup companies… you learn to become very efficient, otherwise you can’t run a company.” However, this comes at a price: you’re dealing with uniquely self-driven entrepreneurs, and sometimes they project their personalities and beliefs onto the entrepreneurship program. “I didn’t want to run or coordinate a course: I wanted to teach it my way. And my superior knows me well enough that he knows the best way to help me perform is to get out of the way and just let me do it.”
Overall, when we reduce the big picture of success to the number of student startups, we fail to acknowledge how effective the entrepreneurship education programs themselves are; in delivering experiential journeys, courses become projects, or even entrepreneurial ventures, and success is a much more nuanced concept that each educator reinterprets locally.
“What I really enjoy out of the classes is to take the ones that have potential but are hesitant, or don’t have the confidence, culturally or whatever, and get them to cross that line.”
“I believe a good class is where you’re a different person after taking it, and the best way to see if this is actually useful is if you have been able to apply some of these skills if you are actually activating some change for yourself and also outside.”
Throughout my interviews at SCET, entrepreneurship educators have shared how they developed their programs both as singular entities and as part of a collective entrepreneurship offering. This was central to understanding the idea of success: does the “make it happen” actualize in specific programs, or is it a compounded effect along a longer journey? As far as I could tell, programs had very different origins and development journeys, but semester after semester SCET reassessed them and increasingly glued them into an overall offering or portfolio. This happens in most universities, only with different dynamics and conditions; on the surface, one could not simply say that Berkeley’s current entrepreneurship offering is better than others.
Such an act of bricolage is a difficult endeavor, as we need to ensure that programs work by themselves while designing if and how they connect to serve specific purposes. We need a strong sensibility to what happens within the single programs and to how external initiatives can serve them best. Although the research I draw on states that we need to properly design entrepreneurship education programs, SCET purposely stayed out of the specific programs and focused only on serving them by building a platform.
One example is represented by the challenge-based learning courses offered as “Collider Labs”, which are not developed by SCET itself. These courses belong to independent research entities working on separate topics like alternative meat or environmental disasters, entities that thrive on a delicate internal balance between research and teaching. SCET offered to structure and gather some of their teaching activities under the umbrella of Collider Labs, and this resulted in a win-win situation where the research entities and SCET reciprocally enlarged their networks and entrepreneurship offerings. Another example is the Startup Semester, an international cohort of students who navigate SCET’s offering of entrepreneurship courses and programs to develop their startups, bringing additional networks into the ecosystem.
Therefore, at SCET, allowing for local development of the entrepreneurship education programs is extremely intentional and encouraged. The Center then focuses mainly on building two other things: a cultural North Star (the BMoE) and complementary services. Its entrepreneurship offering, for instance, serves and is served by other activities such as professional programs and global partnerships with other universities. Again, each of these activities developed as a microcosm out of opportunity recognition (e.g. “we have a bunch of international friends”), but SCET has more control over them than over the entrepreneurship education programs, which have to stay flexible by definition.
I will now enter the White Rabbit hole to start drafting my partial conclusion. I mentioned earlier that the local population, when faced with the straightforward question, just highlighted the idea of Berkeley allowing for serendipity. This recursively clashes with the initial “make it happen” idea that I described: are we implying intentionality? The answer may lie somewhere in the middle: perhaps Berkeley doesn’t merely allow for serendipity, it designs for it.
SCET is explicit about this model that it calls the “Innovation Collider”, which translates to how the infrastructure and culture crafted at SCET and beyond lay down a fertile ground where unplanned collaborations between a multitude of actors are more likely to happen. Entrepreneurship educators, supported by SCET, play a pivotal role in nurturing and curating their local experiential educational journey, which converges towards the collider as a whole.
Intentionally or not, micro-managing or not, the structure holds, and SCET is centrally positioned to host hundreds of brilliant students in an intricate network of alumni, mentors, industry partners and investors, and other activities offered by other Berkeley departments. And still, is this the best configuration for university entrepreneurship centers worldwide? How did Berkeley as a whole manage to create all those venture-funded startups? We can see the beehive, but not the dance.
At some point, it comes down to the people. Indeed, beyond the deliberate choice of hiring entrepreneurs, many have also recognized that SCET is simply always able to choose competent and responsible people. These people then gain confidence and ownership of their programs, and put feedback loops into practice to improve them semester after semester. Boldly enough, systemic serendipity originates from locally developed feedback loops.
“I have an expert team of teaching assistants that have taken this class and now have been through so many bootcamps that they can identify issues. They know what was magical for them, and now that they get to be the ones that deliver it, they just take the initiative – it’s really cool how they are raising the bar.”
“We have a team of volunteer ambassadors, who are students with great energy who felt activated by our program and wanted to engage more. For me, they’re like my focus group of highly activated students, a listening group. So they table with me, participate in panels, interact with prospective students, produce graphics, and so on, but it’s on their volunteering time.”
Existing frameworks fall short of capturing the essence of such a multifaceted reality. It is not the business model, the Berkeley brand, or Silicon Valley; serendipity is a happy accident delicately crafted as an outcome of myriad feedback mechanisms operating within Berkeley and SCET’s entrepreneurial ecosystem. Feedback loops – listening, adapting, evolving – are just one bottom-up mechanism, but both practice and research must look at this level of granularity to uncover how we can orchestrate the emergence of innovation and entrepreneurship in universities — how we make it happen.
“If I hear another framework, I’m gonna puke, because it’s not about frameworks; people write on all these papers about frameworks and stuff, but at the foundation, it’s empathy about everybody. It’s really about being there and being aware of the whole time.”
The post Serendipity by Design: How Berkeley Systemically Fosters Entrepreneurial Success appeared first on UC Berkeley Sutardja Center.
Minjoo Sur (B.S. Electrical Engineering and Computer Science ‘18), co-founder and CEO of Huddle, joined the UC Berkeley SCET community after transferring to UC Berkeley in the fall of 2016. Since then, she has worked as a software engineer at Salesforce. Over the past year, she has furthered her mission to help people stay motivated to achieve their goals by developing her startup, Huddle, a platform designed to promote accountability and community in individuals’ journeys of self-improvement and discovery. We followed up with Minjoo Sur to learn more about her entrepreneurial journey, the origins of Huddle, and where she is headed next.
Minjoo Sur always knew she wanted to develop her software startup eventually. When she transferred to UC Berkeley in 2016, she pursued a degree in Electrical Engineering and Computer Science to establish a strong technical foundation. After deciding to expand her business acumen, she enrolled in ENGIN 183E 001 – Technology Entrepreneurship taught by Professor Naeem Zafar. Minjoo noted that this particular class stood out to her as having the greatest impact on her personal and professional development. At the time, Minjoo described herself as a student with deep passion but no pertinent knowledge. She felt energized and inspired by the high-intensity but rewarding demands of the class, and she grew more confident in her pitching and communication skills. Minjoo notes that the highlight of the course was her first of many pitches – it was the first time she envisioned herself as an entrepreneur, beyond the classroom. She said, “I could really see myself being an entrepreneur. It was not only educational but practical.”
Following her graduation from UC Berkeley in 2018, Minjoo spent the next five years working in a software engineering role at Salesforce. Though she deeply enjoyed her work, she revealed that this role also allowed her the flexibility to dedicate more time to her side hustles.
After experiencing a difficult loss in the family, Minjoo questioned her purpose – what she wanted to do during her lifetime, and what impact she wanted to have on the world.
To Minjoo, entrepreneurship isn’t just about turning a profit – it’s about solving societal problems, catalyzing positive cultural shifts, and embarking on a fulfilling journey of self-discovery. During this difficult time, she read several books to find a reason and purpose in her life pursuits. One book, The Almanack of Naval Ravikant: A Guide to Wealth and Happiness, imparted a profound message: to build a successful company, one must dedicate oneself wholly to an issue they are uniquely positioned to address, where their contributions become indispensable and irreplaceable. This alignment is essential not only to building a successful business but also to finding fulfillment and happiness along the way.
Minjoo reflected on finding the intersection between something she loved to do and a societal problem. She deeply enjoyed motivating others to embrace opportunities for personal growth and observed that her peers struggled to hold themselves accountable for their goals. Minjoo then focused on finding co-founders and got to work building her startup, Huddle.
“Huddle came to my mind naturally after I found the common ground of my purpose, passion, and my talent: self-improvement, motivating others, helping others understand themselves, connecting people, positivity, creativity, and empathy.”
Even in the earliest days of developing Huddle, Minjoo felt confident entering a new world of uncertainty. Equipped with the entrepreneurial acumen gained from SCET classes, industry experience, and a strong technical background, she felt prepared and driven to build a platform aiming to transform the way people go about achieving their goals. More specifically, Huddle currently helps members with ADHD connect with accountability partners who share similar goals. Users can connect with partners to help them stay on track, measure their progress, and foster a sense of camaraderie along the way.
Today, Minjoo and her two co-founders are working full-time to grow Huddle. They have launched their MVP on Slack and are currently focusing on crafting their go-to-market strategy. They have sixteen paying users in the San Francisco area, and they have received hundreds of requests to open more spots outside of the Bay Area. In the future, they are looking to scale their company, and they hope to deepen relationships among users through hosting in-person community events.
“Our vision is to create a world where every person has a support system to become the best version of themselves. We want to make a self-growth journey less lonely and more inspiring by connecting people who have similar life goals to grow together.”
The post Meet Minjoo Sur, the Entrepreneur Helping People Stay Motivated appeared first on UC Berkeley Sutardja Center.
This summer, I participated in the Global Entrepreneurship and Innovation Program in Europe, a month-long study abroad experience held in Segovia, Spain, and Porto, Portugal. The program was divided into two parts. The first week, known as Berkeley Leadership Week (ENGIN 183B: Berkeley Method of Entrepreneurship), was an experiential, gamified experience held at IE University in Segovia, Spain designed to instill in us invaluable leadership lessons and hone our entrepreneurial mindsets. In the latter three weeks, our Berkeley cohort joined the European Innovation Academy in Porto (ENGIN 183C: Challenge Lab), a world-class program centered around tech entrepreneurship. This fast-paced, intensive program was an enlightening experience and allowed me to forge lifelong connections with peers around the world. In just 15 days, we learned to take a startup idea from inception to a refined final pitch, which we delivered to a panel of seasoned entrepreneurs and investors.
Here are my top 8 takeaways from my study abroad experience!
As SCET Managing Director & Chief Learning Officer Ken Singer put it, “Entrepreneurship is the ultimate team sport.” A strong and resilient team is the backbone of any successful venture. At the end of the day, it’s the team that can be either a startup’s competitive edge or its Achilles’ heel. Your team might have the most disruptive technology or most innovative business model, but if the team doesn’t work, the idea can’t succeed. Successful teams are characterized by diversity and balance across skills and personalities, all united in a shared vision, values, and commitment. Ideas are only secondary to the makeup of the team, and founders must be able to demonstrate to investors that they are the right people to bring an idea to life.
Remember to surround yourself with teammates who complement your weaknesses, and choose your teammates wisely!
Throughout our time at the European Innovation Academy, we had the opportunity to seek mentorship from seasoned professionals with expertise across a diverse array of areas, including business development, marketing, and design. Effective mentorship can be a cornerstone of a budding startup’s success, serving as a guide for new founders navigating uncharted waters. Mentors can play a variety of roles within a startup – whether it be helping to identify problems, exploring different options, providing deep industry expertise, facilitating access to others, or serving as a sounding board for founders. However, it is imperative to remember that the goal is to seek guidance from mentors, not answers. While mentors might not have the “right answers”, they can provide valuable insights that steer founders in a promising direction.
In the startup world, failure and rejection are inextricable parts of the journey. These experiences, though frustrating, can be great learning experiences that every entrepreneur should embrace. This is not easy, as we’ve been conditioned to associate failure with negative emotions. It’s often in the difficult moments that clarity can be achieved – it’s better to recognize early on that a particular idea or approach is a dead end, rather than investing significant time and capital into it.
Another thing I learned is that perfectionism is the enemy of progress. When building a startup, effective and speedy execution must take precedence over delivering a “perfect” product. The truth is, endless iteration and feedback are required to achieve success.
One of the most recurring pieces of feedback received from our professors, mentors, and lecturers was the importance of developing a solution only after gaining a comprehensive understanding of the problem. Only after a thorough customer discovery and validation process should we begin developing a solution. Early-stage founders must avoid two common pitfalls: creating a solution without a well-defined problem and falling in love with the idea. In the first week of the European Innovation Academy, our teams were instructed to spend significant time with our potential customers to learn as much as possible about their current behaviors, priorities, pain points, attitudes, and lifestyles to find product-market fit. After each iteration, it’s crucial to maintain close contact with the customer to keep on improving the business.
During this program, we were able to meet hundreds of new people. Not only did I develop friendships with my sixty fellow students from Berkeley but I was able to meet numerous students from all over the globe at the European Innovation Academy. One of the most meaningful aspects of studying abroad was engaging with different cultures, traditions, and lifestyles. My five-person team was composed of members from three different continents. Collaborating with a diverse group of people enhanced the flow of ideas, increased our creativity, and improved our communication skills. While I learned much from daily lectures and workshops, I found that I learned the most from my peers.
Contrary to what we’re often conditioned to think, conflict is a good thing. During our time at the European Innovation Academy, my team experienced a fair share of conflicts. However, because we established a culture of open-mindedness early on, each team member felt empowered with honesty and compassion without worry. We were required to develop a co-founder agreement, outlining our strategies for conflict resolution. Embracing conflict fosters healthy discussion and discovery, enabling teams to establish a culture that values diverse viewpoints. Team conflict can facilitate better problem-solving, provide clarity, and spur creativity and innovation. It’s important to remember that healthy tension and communication are important to making progress.
Active listening is one of the most important skills for entrepreneurs, especially at the earliest stages of a venture. It’s a founder’s job to listen and learn from the people around them whose voices matter most – whether it be your customers, mentors, or teammates. In the customer discovery process, I learned that the most effective approach is to simply focus on learning about the customer. Pitching or selling from the get-go, talking more than listening, or prompting the customer with leading questions can hinder a founder’s ability to unlock the most promising solutions. Equally important is listening attentively to each team member, especially the members who tend to be more introverted – it’s often these individuals who have insights or information that can propel a team forward.
Finally, I learned that leadership is not a one-size-fits-all concept, especially in entrepreneurial environments where innovation often requires a shift away from more traditional methods. There is a difference between being a good leader and being a good manager: a leader inspires action beyond what people believe themselves to be capable of, not simply delegating tasks. It’s also about learning to optimize yourself for the moment – being adaptable and serving wherever your team needs you most. Setting the right tone and creating a positive culture is fundamental to success. By modeling the way, inspiring a shared vision, and encouraging the heart, a leader can lead their startup to success. After all, investors are not only betting on your idea – they’re betting on YOU to be a stellar leader.
The post 8 Takeaways from UC Berkeley’s Largest Startup Summer Program appeared first on UC Berkeley Sutardja Center.
Berkeley, CA, Aug 27, 2024 — On November 11-12, 2024, Berkeley Engineering’s Sutardja Center for Entrepreneurship & Technology (SCET) will launch the AI for the C-Suite program, a premier executive education course designed to equip senior leaders with strategic insights into leveraging artificial intelligence (AI) within their organizations. This in-person AI strategy workshop, held on the UC Berkeley campus, will be led by an exceptional faculty of AI pioneers and industry innovators who have shaped the future of AI.
Berkeley Engineering, consistently ranked among the top engineering schools globally, including the #3 U.S. undergraduate and graduate engineering program, is proud to offer this advanced AI strategy training tailored for C-suite executives. The AI for the C-Suite program is a unique opportunity for senior leaders to engage directly with world-renowned AI experts and gain actionable insights to drive AI transformations in their businesses.
Participants will have the chance to learn from Berkeley’s distinguished faculty, including:
UC Berkeley is ranked #1 globally by Pitchbook for producing venture-backed startups, a testament to its deep commitment to innovation and entrepreneurship. At the heart of this success is Berkeley’s Sutardja Center for Entrepreneurship & Technology (SCET), which has been instrumental in nurturing groundbreaking startups and fostering an entrepreneurial mindset. The AI for the C-Suite program reflects this mission by empowering executives to lead their organizations through AI-driven transformations, equipping them with the tools and frameworks needed to develop and implement strategic AI initiatives.
This executive AI course is not just about learning AI concepts; it’s about applying them to business strategy. With hands-on workshops and direct access to industry-leading AI practitioners, participants will gain practical insights that can be immediately applied to their organizations. The program covers everything from AI strategy development to the ethical implications of AI in business.
“AI is transforming every industry, and leaders need to be equipped with the knowledge to navigate these changes. The AI for the C-Suite program provides an unparalleled opportunity to learn from the best in the field and apply those insights to real-world business challenges,” said Pieter Abbeel, AI Pioneer and Co-Director of the Berkeley Artificial Intelligence (BAIR) Lab.
“Berkeley Engineering’s commitment to advancing technology and entrepreneurship is embodied in this program. We are excited to bring together such an esteemed group of faculty to help executives lead with AI,” said David Gallacher, Business Strategy Expert and Industry Fellow at SCET.
The Sutardja Center for Entrepreneurship & Technology (SCET) at UC Berkeley is a global leader in technology innovation and entrepreneurship education. SCET offers a range of programs that empower professionals and executives to turn innovative ideas into successful ventures. With a focus on hands-on learning and real-world application, SCET is at the forefront of producing the next generation of tech leaders.
Kristina Susac
Head of Professional Programs
susac@berkeley.edu
https://scet.berkeley.edu/professional-programs/
The post Berkeley Engineering Launches AI Program for Execs appeared first on UC Berkeley Sutardja Center.
Collider Cup XIV finalist Peyton McQueen first got the idea for her groundbreaking product, Aqua AI, from a podcast.
“Someone described Cal [as] having perfected the ‘formula’, referring to the way in which training is given to swimmers,” McQueen explained. “This immediately opened my eyes to how almost every aspect of swimming is dictated by math — amount of yardage, intervals, stroke count, the physics of an efficient stroke.”
Once McQueen discovered that swimming follows a mathematical formula, she realized there must be a perfect ‘solution’ that swimmers can train towards. Eager to explore this idea, she began recruiting data science interns to research the potential of AI in swimming, backed with her own lifetime experience as a swimmer and team manager.
“Swimming is a sport where your head’s in the water. It’s all about feel,” McQueen said. “What I provide for the swimmers almost every single day [as a manager] is film. Me and a handful of interns will film them above water and underwater, and then it is uploaded to a Google drive and meticulously organized so that it’s very easy for them to see their progress.”
It was through this footage that McQueen began to test video analysis technology for the Cal swim team. Within a year, she decided to take the product’s success even further with SCET.
“When I started the Sports Tech class this past spring, it enlightened me to see a real marketable business that would combine what I’ve already done for the [Cal] team and also progress the sport of swimming as a whole,” McQueen reflected. “Going to the Collider Cup, I got to see how this concept resonated with investors, including outside the swimming community. With the development of the model after that, we can produce unimaginable changes in the sport as a whole.”
McQueen stated that marketing her product to a wider audience is a key hurdle to clear before Aqua AI can grow further.
“If the product is going to distract the coach from the work that needs to be done, they’re not going to use it,” she said. “Creating something that would transition smoothly into their own program is what we’re looking for, and what [the Sports Tech class] helped me to formulate.”
But before expanding the company, McQueen wants to focus this summer on improving the product itself with the help of Cal’s swim team.
“I want to solidify the research and choose an investor that is going to be right for us,” she said. “We’re also still building our team. The beauty of the Cal swim team is that not only am I surrounded by the best athletes in the world, but they chose Cal because it is such a prestigious academic institution and they have minds that are just as brilliant. I’m talking to swimmers with backgrounds in engineering and computer science. I’m also working with Coach David Marsh on the swim side of things — he has been my most important mentor throughout my college career and my life.”
Along with Marsh, McQueen mentioned startup expert Mark Searle as a mentor who is currently helping Aqua AI grow. She plans to take Searle’s SCET course “Startup Catalyst: Let’s Speed Up Your Startup” this fall.
Another resource McQueen expressed gratitude for was the Sports Tech class, where she worked to pitch and market Aqua AI with her class team: Tommy Roder (’26), Forrest Frazier (’24), Hank Rivers (’26), Colby Hatton (’26), Isabelle Stadden (’24), Ashlyn Fiorilli (’24), Emily Gantriis (’24) and Stephanie Salesky (’24).
Through the Sports Tech class, McQueen picked up new skills and gained experienced mentors.
“The Sports Tech class is a unique place that combined so many of my interests,” McQueen said. “There are tons of athletes and tech entrepreneurs in the class, which is a very special combination of people with direct perspectives who can create something that we know people in the sports industry will use. I am extremely grateful for all of the advisors I have been connected to — my professor from the class, Christyna Serrano and advisor, Peter Evans — in learning to organize a business model.”
McQueen believes that the strongest reason behind the class’s success is the ever-growing potential for the revolutionary future of sports technology.
“We’re in a country where sports is heavily celebrated,” she said, referencing the U.S. Olympic team as an example. “Sports itself is really exciting, and with the development of AI, I can’t imagine what’s next. It’s easy for people to get excited about something that’s entertainment as well. In another sense, sports tech is so exciting because athletes and people like me know exactly how to market sports. We translate our passion for sports into a passion for business.”
And it’s this passion that propels both McQueen and her team towards excellence. According to McQueen, the attitude of a swim team is just as essential to their success as their technical skill.
“Aqua AI, while it can provide the meticulous formula needed to train swimmers at the elite level, cannot replace a coach because I’ve learned that team culture is needed alongside a perfect training formula,” she said. “The culture at Cal is what sets us apart. The passion and attitude of the swimmers and coaches, combined with this seamless mathematical formula.”
Looking toward the future of her product, McQueen is grateful to be immersed in Berkeley’s culture of both technological and athletic ambition.
“It’s a blessing being so close to such great resources,” she said. “As I said, swimming is a form of math. When I translate this formula using my platform, my biggest dream is that the sport of swimming can be solved.”
The post Solving swimming: Aqua AI founder Peyton McQueen makes waves appeared first on UC Berkeley Sutardja Center.
SimpleCell, the brainchild of Sehej Bindra and Arvind Vivekanadan, was developed in the Spring 2024 SCET course titled ENGIN 183C 002 – Challenge Lab: Transforming Brain Health with Neurotech | A Berkeley Changemaker Course. By the end of May, the team earned second place at the Collider Cup and was recently accepted into Forum Ventures.
SimpleCell is a platform integrating large language models to streamline bioinformatics research, allowing researchers to approach large clinical datasets efficiently and accurately. We followed up with Sehej and Arvind to learn more about their journeys and the evolution of SimpleCell, from its inception to the present.
Both Sehej and Arvind recently graduated with the class of 2024, Sehej with a degree in Biochemistry and Arvind with a degree in Electrical Engineering and Computer Science. It wasn’t until their final semester, in the Neurotech Collider Lab course, that their journeys intersected.
When Sehej started at UC Berkeley, he originally planned to follow a pre-med path to pursue a career in biological research and academia. However, his freshman year was online, and he was not able to get wet lab experience due to the pandemic. Instead, Sehej developed his passion for teaching science within his local community, establishing his own brand and tutoring business. He enjoyed being able to provide a valuable service to students in his community, and he found the work deeply meaningful. Sehej developed his business acumen, gaining an understanding of how to scale a business, and he knew that he wanted to continue building educational products that could directly impact other people once he was able to return to campus in person.
Since Arvind began his time at Berkeley, he was determined to build something. He immersed himself in Berkeley’s startup ecosystem, and he joined the Berkeley Venture Capital Club and took on analyst roles to learn more about what makes successful startups. He describes the proximity to Silicon Valley and the unique offerings at UC Berkeley as being instrumental to his development.
Throughout his time at Berkeley, Sehej participated in biological research at a lab at UCSF. It was here that he noticed a critical pain point: medical doctors lacked the formal training to manipulate large data sets efficiently, a task that has grown increasingly important over the past decade. Without the proper computational skillsets, researchers faced significant barriers to interpreting their data accurately and quickly.
As an undergraduate student, Sehej was responsible for the more tedious aspects of data analysis in the lab. Feeling frustrated by current methods, Sehej knew that there had to be a better solution to assist medical researchers in streamlining the data analysis process; he observed that nearly all of the researchers he worked with instinctively turned to ChatGPT to get started. However, ChatGPT is not tailored to the unique challenges of bioinformatics. Sehej identified an opportunity to create a specialized tool to help researchers confidently and accurately work with large datasets.
“What if we just create a better ChatGPT, fine-tuned for bioinformatics?”
Sehej Bindra
When Sehej pitched the idea to the class in January, Arvind strongly identified with the vision and mission, and a partnership was born. Arvind’s computational background and software skillset complemented Sehej’s domain knowledge, and they built a team comprising all undergraduate students.
Reflecting on the mentorship they have received throughout the development of their startup, Arvind and Sehej both recognize their peers as being among the most influential forces in their development. They noted that witnessing their peers build successful businesses just one year out of graduation inspired them, and that their peers’ encouragement has been instrumental to their success.
In reflecting on their proudest moments, Arvind recalled the completion of the first functional prototype as a particularly remarkable moment for him, as it represented a culmination of all of their research and learnings. For Sehej, what resonates most deeply is that he listened to his instincts despite initially dubious responses.
He said, “I was able to kind of stick to my gut and see this project through and help lead the team and actually create a prototype and then convince these judges that this vision is compelling.”
As Sehej and Arvind continue to develop SimpleCell, they keep their central intent at the heart of their work – to create a solution that helps the average biologist transform drudgery into enjoyable tasks.
Sehej said, “My personal goal with SimpleCell is to create some beautiful software that takes a frustrating task and not only makes it easy, but enjoyable. That’s genuinely what drives me.”
Similarly, Arvind noted that he hopes to make an impact that changes the lives of biologists using a product he brought to life.
He said, “There is a saying that it’s better to have a few people who really love your product than a lot of people that kind of use it. If we can come out of this where we build something that actually changes lives every day by using our product, then I would consider that a win.”
The post Meet the Minds Behind SimpleCell appeared first on UC Berkeley Sutardja Center.