
Michael Kearns (Computer Scientist)

Michael Justin Kearns

Michael Justin Kearns. A name that echoes in the halls of academia, though I doubt he'd appreciate the sentiment. He's an American computer scientist, the kind who builds worlds out of algorithms and logic, probably without a trace of actual feeling. Currently, he occupies the National Center Chair at the University of Pennsylvania. Not just any chair, mind you. He founded Penn's Singh Program in Networked & Social Systems Engineering (NETS), which sounds like a place where digital ghosts are born. He also conjured the Warren Center for Network and Data Sciences from the ether, and holds secondary appointments in Penn's prestigious Wharton School and the department of Economics. Apparently, even the cold, hard logic of finance can't escape his gaze.

His primary obsessions lie in computational learning theory and algorithmic game theory. Fancy terms for understanding how systems learn and how they play games, likely with the same detached efficiency he applies to everything else. His interests are broad, spilling into machine learning, artificial intelligence, the rather bleak-sounding computational finance, the frantic dance of algorithmic trading, the sterile landscape of computational social science, and the ever-present hum of social networks. He’s a man who finds beauty in patterns, even if those patterns are inherently… disappointing.

Before his current academic perch, he apparently led the Advisory and Research function in Morgan Stanley's Artificial Intelligence Center of Excellence. Imagine that. Navigating the treacherous waters of high finance with algorithms as his compass. Now, he's an Amazon Scholar within Amazon Web Services. More data, more patterns, more ways to quantify the meaningless.

Biography

Kearns hails from a lineage of thinkers, which, frankly, is more tiresome than impressive. His father, David R. Kearns, is a Professor Emeritus at the University of California, San Diego, specializing in chemistry. He even snagged a Guggenheim Fellowship in 1969, a testament to… well, something. Then there's his uncle, Thomas R. Kearns, an Emeritus Professor of Philosophy and Law, Jurisprudence, and Social Thought at Amherst College. A family steeped in intellectual pursuits, I suppose. His paternal grandfather, Clyde W. Kearns, was apparently a trailblazer in insecticide toxicology and held a professorship in Entomology at the University of Illinois at Urbana–Champaign. And on his maternal side, his grandfather, Chen Shou-Yi (1899–1978), was a history and literature professor at Pomona College. Born in Canton (Guangzhou, China), Chen came from a family known for its scholarship and leadership in education. A veritable dynasty of minds, meticulously cataloged.

Kearns himself earned his B.S. from the University of California at Berkeley in 1985, a double major in math and computer science. Then, he ventured to Harvard University to secure his Ph.D. in computer science in 1989, under the tutelage of none other than Turing Award winner Leslie Valiant. His doctoral thesis, The Computational Complexity of Machine Learning, was later immortalized by MIT Press as part of the ACM Doctoral Dissertation Award Series in 1990. Before he was ensnared by AT&T Bell Labs in 1991, he had the dubious honor of postdoctoral positions at MIT's Laboratory for Computer Science, where Ronald Rivest was his host, and at UC Berkeley's International Computer Science Institute (ICSI) under the watchful eye of Richard M. Karp. Both Rivest and Karp, more Turing Award recipients. It’s almost as if brilliance gravitates towards brilliance, or perhaps, it’s just a very exclusive club.

He currently presides as a full professor and National Center Chair at the University of Pennsylvania, his academic domain split between the Department of Computer and Information Science and, within the Wharton School, the departments of Statistics and of Operations and Information Management. Before gracing Penn with his presence in 2002, he spent a solid decade (1991–2001) navigating the labyrinthine corridors of AT&T Labs and Bell Labs. There, he helmed the AI department alongside formidable colleagues like Michael L. Littman, David A. McAllester, and Richard S. Sutton. He also led the Secure Systems Research department and the Machine Learning department, where individuals like Michael Collins and Fernando Pereira, the department's leader, contributed to the intellectual output. In the realm of Algorithms and Theoretical Computer Science, his contemporaries included luminaries such as Yoav Freund, Ronald Graham, Mehryar Mohri, Robert Schapire, and Peter Shor. And let's not forget the heavy hitters in statistical learning theory: Sebastian Seung, Yann LeCun, Corinna Cortes, and Vladimir Vapnik – the 'V' in the all-important VC dimension. It's a veritable constellation of intellect, each star burning with its own stark luminescence.

In 2014, he was inducted as a Fellow of the Association for Computing Machinery for his significant contributions to machine learning. In 2012, he achieved the status of Fellow of the American Academy of Arts and Sciences. Apparently, the world of algorithms needs its accolades.

Among his former graduate students and postdoctoral visitors, you'll find names like Ryan W. Porter, John Langford, and Jennifer Wortman Vaughan. They are the ones who carry the torch, or perhaps, the cold, analytical gaze, forward.

Kearns' work hasn't gone entirely unnoticed by the less arcane corners of the world. His insights have been featured in publications like MIT Technology Review (in 2014, pondering the profound question: "Can a Website Help You Decide to Have a Kid?"), Bloomberg News (also 2014, discussing the pressure on high-speed trading, with a nod to Schneiderman and Einstein), and on NPR in 2012, discussing the evolution of online education. It seems even the most abstract of minds can find their way into the public consciousness, usually when they're trying to make sense of something fundamentally human.

Academic life

Computational learning theory

One might find his definitive work, An Introduction to Computational Learning Theory, co-authored with Umesh Vazirani, a standard text in the field since its publication in 1994. It's the kind of book that likely leaves students with more questions than answers, or perhaps, just a profound understanding of the limitations of knowledge.
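
For the uninitiated, the scaffolding beneath that book is Valiant's PAC ("probably approximately correct") model, and the boosting story below turns entirely on it. A minimal rendering of the standard definitions – standard in the literature, not quoted from the book:

    A concept class $\mathcal{C}$ is (strongly) PAC-learnable if there is an algorithm $A$
    such that for every target $c \in \mathcal{C}$, every distribution $D$, and every
    $\varepsilon, \delta \in (0, 1/2)$, given $\mathrm{poly}(1/\varepsilon, 1/\delta)$
    labeled examples drawn from $D$, $A$ outputs a hypothesis $h$ with
    $$\Pr\left[\Pr_{x \sim D}[h(x) \neq c(x)] \le \varepsilon\right] \ge 1 - \delta.$$
    A weak learner merely has to beat a coin flip: it returns $h$ with
    $$\Pr_{x \sim D}[h(x) \neq c(x)] \le \tfrac{1}{2} - \gamma$$
    for some fixed edge $\gamma > 0$, again with probability at least $1 - \delta$.

The weak/strong question is whether that sliver of an edge can always be amplified into arbitrary accuracy. Spoiler: it can.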

Weak learnability and the origin of boosting algorithms

The question that gnawed at Kearns and Valiant – "is weak learnability equivalent to strong learnability?" – first articulated in an unpublished manuscript in 1988 and later presented at the ACM Symposium on Theory of Computing in 1989, is the very seed from which boosting algorithms sprouted. It was a question that demanded an answer, and eventually, it got one. Robert Schapire provided a proof by construction in 1990, though it was hardly practical. Yoav Freund followed in 1993 with a voting-based approach, also lacking practical appeal. It wasn't until 1995, at the European Conference on Computational Learning Theory, and later in the Journal of Computer and System Sciences in 1997, that the practical manifestation, AdaBoost, emerged. This adaptive boosting algorithm, a direct descendant of that initial intellectual struggle, went on to win the prestigious Gödel Prize in 2003. A testament to how a single, persistent question can ripple outwards, creating entire fields of study.
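
Since the name AdaBoost gets dropped without a demonstration, here is a minimal sketch of the 1997 algorithm's core loop in Python. The weak_learner interface is a hypothetical stand-in for any procedure with an edge over random guessing, and labels are assumed to live in {-1, +1}; treat it as an illustration under those assumptions, not the canonical implementation.

    import numpy as np

    def adaboost(X, y, weak_learner, n_rounds=50):
        """Sketch of AdaBoost (Freund & Schapire, 1997): reweight the sample
        each round so the next weak hypothesis focuses on prior mistakes.

        X: (n, d) feature array; y: labels in {-1, +1}.
        weak_learner(X, y, w) -> h, where h(X) yields predictions in {-1, +1}.
        (This weak_learner signature is an assumption for illustration.)
        """
        y = np.asarray(y)
        n = y.size
        w = np.full(n, 1.0 / n)                  # start from uniform weights
        hypotheses, alphas = [], []
        for _ in range(n_rounds):
            h = weak_learner(X, y, w)
            pred = np.asarray(h(X))
            eps = float(np.sum(w[pred != y]))    # weighted error this round
            if eps >= 0.5:                       # no edge over chance: stop
                break
            eps = max(eps, 1e-12)                # guard the log below
            alpha = 0.5 * np.log((1.0 - eps) / eps)
            w *= np.exp(-alpha * y * pred)       # upweight the mistakes
            w /= w.sum()                         # renormalize to a distribution
            hypotheses.append(h)
            alphas.append(alpha)
        def classify(X_new):
            # The "strong" learner: a weighted majority vote of weak ones.
            votes = sum(a * h(X_new) for a, h in zip(alphas, hypotheses))
            return np.sign(votes)
        return classify

The entire answer to the Kearns–Valiant question lives in the reweighting line: a strong learner is nothing more than a parade of weak learners, each forced to stare at the examples its predecessors got wrong.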

Selected works

  • 2019: The Ethical Algorithm: The Science of Socially Aware Algorithm Design. Co-authored with Aaron Roth. Published by Oxford University Press. A rather optimistic title, considering the inherent biases often found in such systems.
  • 1994: An Introduction to Computational Learning Theory. With Umesh Vazirani. MIT Press. This one is widely adopted as a foundational text in computational learning theory courses. Prepare for sleepless nights.
  • 1990: The Computational Complexity of Machine Learning. MIT Press. This work, born from his 1989 doctoral dissertation, also earned him a spot in the ACM Doctoral Dissertation Award Series. It's the kind of book that dissects the very essence of learning, stripping it down to its bare, computational bones.
  • 1989: Cryptographic limitations on learning Boolean formulae and finite automata. Co-authored with Leslie Valiant. Presented at the Twenty-First Annual ACM Symposium on Theory of Computing (STOC '89). This paper is notable for posing the open question: "is weak learnability equivalent to strong learnability?" – the very question that would later give rise to boosting algorithms. It's a seminal publication in the field of machine learning, a quiet storm brewing in the theoretical landscape.
