Goodwill Computing Lab focuses on improving the operational efficiency and cost effectiveness of large-scale parallel computing systems by designing novel computational and analytical tools and models to improve the scalability, reliability, and performance of HPC systems. Our approach consists of three steps: (1) posing unconventional questions and formulating hypotheses, (2) designing and conducting novel experiments to uncover new insights about large-scale HPC systems, and (3) validating our findings and techniques on real systems and providing provable mathematical guarantees.

We develop new analytical models and tools, and devise novel techniques that improve the reliability, power efficiency, and resource utilization of large-scale data-centric systems. Our techniques and tools benefit many large-scale data-intensive applications that produce, analyze, and manage terabytes of data per day on large supercomputers. We also enthusiastically apply our expertise in resilience, high performance computing, and data analytics to emerging interdisciplinary research domains.

Goodwill Computing Lab also focuses on preparing the next generation of students and educators to take advantage of parallel computing systems to solve problems of societal importance. The key to successfully realizing this vision is developing educational activities and material related to parallel computing and integrating that material at appropriate educational levels. To this end, we design and create new educational activities to train the next generation of HPC researchers.

Please visit our team, research publications, and educational outreach webpages to learn about us and our work.

Why did we name our research group Goodwill Computing Lab?

When we began to brainstorm as a group about what we should focus on optimizing as a primary metric, we considered multiple traditional metrics, including performance, cost, energy efficiency, resilience, security, privacy, and scalability. Finally, we realized that “goodwill-ness” is a single metric that encapsulates everything, and it is a natural metric if we begin to treat computers as humans.

High performance computing systems have enabled us to achieve capabilities in real life that were hard to even imagine a few decades ago. However, we are also on the verge of very exciting times where computing capabilities can potentially do more harm than good if not exercised in good faith. Increasing cyber attacks and privacy breaches using powerful computing systems and sophisticated algorithms are a few examples of such imminent danger. Therefore, our long-term goal is to increase the “goodwill-ness” of high performance computing systems.

We envision, much in alignment with our friends in the robotics field, that in the future computers will be an integral part of society and we will treat them as humans. And how do we judge humans in an incentive-free, self-sufficient society? By their “goodwill-ness” (i.e., empathy, willingness to help others, willingness to advance capabilities for everyone’s good). This is how we will eventually judge computers. The only trouble is that goodwill-ness is not easily measurable or quantifiable, but it carries all the right intent.

Why are high performance large-scale computing systems so exciting?