The value of poster presentations in scientific conferences

We share scientific value with the broader scientific community in three main ways: peer-reviewed platforms, non-peer-reviewed platforms, and research presentations. Here, I focus on one form of the third method: poster presentations. In July 2024, I will deliver a two-hour poster presentation daily for a week. In this blog post, I examine the nature of a poster presentation, my goals for it, and my strategy during it.

First, I discuss the nature of a poster presentation. A journal paper is passive and objective, while a poster presentation is dynamic and personal. A journal paper is a final product, compiled in advance. In contrast, a poster presentation is a one-on-one or one-to-few session that we can tailor on the fly. With each sentence and paragraph, we introduce new words or concepts and receive feedback through facial expressions, voice, words, and body language. This gives us a unique chance to customize our value to the engaged person based on immediate feedback. A poster session requires soft skills, such as self-awareness. Each visitor arrives with a different level of familiarity with the topic, making each poster session a new talk for a different audience.

Unlike in an oral presentation, the audience is free to decide whether to stop by the poster. This freedom allows participants to be more engaged and makes the session feel more personal than an oral presentation, where asking questions can be difficult for fear of revealing a lack of understanding to the wider audience, and where the topic may simply not interest everyone present.

Next, I discuss the goal of a poster presentation. A successful journal paper impacts the field and is cited by other papers. A successful poster transforms the interaction into a professional relationship that serves the interests of both parties during and after the conference. We attend conferences to build new relationships and reinforce existing ones. A poster session is an integral part of achieving this goal. In June 2024, I attended a four-day workshop. Training and tutorial videos were available online from previous years, but they lacked the experience of forming interpersonal relationships. During meals, I try to initiate light interpersonal relationships by discussing non-science topics and by coming across as a pleasant person. This matters for the strategy I describe next.

First, I identify two types of audience members: those to whom I can provide value for their studies or careers, and potential colleagues working on similar problems in my field with whom I can discuss ideas and seek feedback. Both may turn into trusted professional relationships, which we aim to cultivate.

During my poster session, I create an environment where both parties can feel vulnerable. I start by sharing my own vulnerabilities and weaknesses, making the other person more comfortable sharing theirs. For example, I mention the work I have done and the work of others, highlighting why the latter was helpful given my lack of skill. Vulnerability helps us understand each other’s needs, and complementary skill sets form the basis of a professional relationship. Additionally, discussing non-science topics during breakfast, lunch, or between sessions is important because we are more likely to be vulnerable with people we have talked with and shared jokes with.

I observe non-verbal cues. Poster sessions have no set timeline; engagement time depends on the other person. A simple way to gauge interest is the direction of their feet and their side-eye movement. If their feet point away from the poster, the person is not engaged, and it is time either to ask about their interests or to end the presentation.

It is difficult for the audience to read the sentences on the poster and listen to the content simultaneously. The goal is to be interactive and identify how both parties can benefit each other, not one-directional like an oral presentation. We pause, ask questions, seek their feedback, and inquire about their interests, creating a back-and-forth dialogue.

A QR code or URL is not enough. If possible, a poster should have a demo. For open-source tools, I bring my own laptop, place it on a round cocktail table, and demonstrate. The demo must be intuitive, useful, and flawless. For non-harmful materials, I believe it’s a good idea to bring the materials and share them with the audience on the spot. I recall a materials science course where the instructor brought different types of materials, and we could sense and feel them. That is engaging and intriguing for most.

By the end, the poster must have a clear call-to-action. If I have identified either of the two audience types mentioned above, I invite them to stay connected, exchange contact information, and have lunch or dinner afterward to build trust.

To reiterate, we attend in-person events to cultivate professional relationships that benefit each other’s careers. A poster session allows one to identify audience members who may become part of a professional network. During a poster, we remain open, interactive, and vulnerable to identify each other’s needs and find complementary skills. If both agree, we collaborate to advance each other’s careers.

Practice is costly

The term “practice” in school is associated with “exams” and “problems”. Both serve as checkpoints to gauge a student’s understanding of the learning material and their ability to apply core concepts.

Regrettably, practice demands time and stamina. In Fall 2023, I enrolled in a course on Phonons. One practice problem asked me to analytically express the expected position of an anharmonic oscillator from its Hamiltonian. My initial attempt ran to 3-4 pages of unorganized derivations. A mistake early on could necessitate redoing the entire set of derivations. Once I understood the overall scheme, I refused to engage in another practice run. Each practice run was associated with physical pain.
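
For concreteness, here is the kind of closed-form answer such a derivation targets, assuming the textbook cubic anharmonic oscillator (the course problem may have used a different potential). To first order in perturbation theory, the expected position shifts away from zero:

\[
\hat{H} = \frac{\hat{p}^{2}}{2m} + \tfrac{1}{2} m\omega^{2}\hat{x}^{2} + \lambda \hat{x}^{3},
\qquad
\langle \hat{x} \rangle_{n} \approx -\frac{3\lambda}{m\omega^{2}}\,\langle \hat{x}^{2} \rangle_{n}^{(0)}
= -\frac{3\lambda\hbar}{m^{2}\omega^{3}}\left(n + \tfrac{1}{2}\right) + \mathcal{O}(\lambda^{2}).
\]

This is the same occupation-dependent shift that underlies thermal expansion in the phonon picture.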

I paused. I had to minimize the physical labor, time, and potential areas for mistakes. I stared at my written work. I used my index fingers to locate sections I could re-express in compact symbolic form while keeping them clear for the grader. To reduce strain and increase efficiency during practice, I switched to using a whiteboard. Before each run, I spent more time on the evaluation process. In the end, I streamlined the derivations to 1-2 pages. The day before the exam, I practiced again to check my preparedness, and I minimized the time required by solving the problems in my head with my eyes closed. On exam day, I left the classroom early. I had attained the desired academic performance.

My experience underscores that practice itself is not the goal. The objective of practice is to identify the flaws and inefficiencies in our current technique. Between practice runs, we consciously refine, reinforce, or discard parts of the current technique through imagination, discussions with colleagues, and literature surveys.

The ideal goal is to allocate the least amount of resources to practice runs yet perform at the expected level. Abraham Lincoln famously said, “If I had six hours to chop down a tree, I’d spend the first four hours sharpening the axe.” If Abraham Lincoln were to participate in a chopping competition, he would have experimented with different materials for the axe, modified the blade’s shape, and adjusted the weight balance during those four hours. Refinement through the evaluation process between practice runs is what allows us to protect our finite resources and perform at the highest level.

July 8, 2024, 160 Claremont Ave, New York

My approach to daily work

“Slow but steady wins the race” is the moral of The Tortoise and the Hare. Unfortunately, the phrase breaks down in practice: if we are slow and steady with no acceleration, we do not win the race. This was a random thought.

We run a race called a day. A day has no finish line but a timeline. To me, winning the day means improving my skills, feeling fulfilled, and staying motivated for the next day. After years of trial and error since I returned as a full-time student four years ago, I am proud to say I have developed techniques and mindsets that allow me to win daily. Here is my approach.

My race begins the night before. I sleep 8 hours and usually wake up without an alarm. I have examined the number of hours of sleep my body needs to wake up naturally and stay productive throughout the day; it is 8 hours. Sleeping fewer hours than I need is a sign that I was not productive during the day or that I lack time-management skills.

During the race, I eliminate visual and sensory cues that might derail me. The book “The Power of Habit” states that most of our actions are cued subconsciously by the environment. I remove these root cues. I have no email or group messaging apps on my computer. The apps are installed only on my mobile phone, which remains muted and out of sight. I willfully check emails and messaging platforms only during breaks. This way, my actions stay under my control, and checking messages at the allowed times becomes a reward built into the focus technique I describe next.

I focus in 4-hour blocks using a 50-minute focus, 10-minute rest technique. I treat the brain like any other muscle in the body. The brain can run at full capacity for an extended time; however, just as I would space out sessions if I were trying to maximize the number of pull-ups in 24 hours, I space out my focus sessions. During the 4 hours, I play a video on my side monitor of a person conducting a 4-hour study session with a visible timer. The person on the monitor provides great accountability for staying in the race and serves as a clock. For each session, I record in a plain text editor the time and the tasks I have accomplished. I do not use Notion or other full-featured note-taking apps, to avoid cues. The plain text file filled with completed tasks provides a sense of achievement and momentum.

Every 50 minutes, I take a 10-minute rest to allow my eyes and brain to both relax and consolidate information. During the 10-minute rest, I listen to music or play mini basketball at home. On a normal day, I conduct 2 sets of 4-hour blocks. Beyond the timed sessions, I explore ideas, gain new knowledge, write blog posts, and learn a foreign language without tracking the time. I exercise, either between the two blocks or after the two blocks, by playing basketball or doing a compound body workout of pull-ups and dips.

During weekends and holidays, I do not force myself; a regular race does not apply. I work at less than half the intensity without tracking time. I relax, read books, and enjoy time with my family. I am happy to rest because I have had a fulfilling week. We need to rest to remain happy, appreciative, and fulfilled. Relaxation is often accompanied by daydreaming and the exploration of ideas. When I have good ideas, I record them briefly on my phone. I give myself permission to rest and strategize for my goals. Then, I begin my race again.

July 5, 2024, 160 Claremont Ave, New York

Thoughts on rejection

Rejection is a form of failure. Failure is the inability to meet expectations. Here, I present two types of rejection and how I navigate them.

As of this writing, I encounter rejections every one or two months. My recent paper was rejected by a journal, and my request for collaboration was declined. As I further advance my career in academia, I expect these intervals to decrease.

I do not use the phrase “don’t take it personally.” There is nothing more personal than spending one’s invaluable and finite resource called time.

Nonetheless, rejection is inevitable. We compete for finite resources provided at each level of our career.

We are on a ship called a career. I view rejection as a reef in the ocean. When the ship hits the reef, there are two outcomes: it either sinks, or it sails on with altered velocity and broken parts.

The first type of rejection may destroy the ship and leave no further opportunity to advance one’s career. In most cases, however, we encounter the second type of rejection, which alters the velocity of one’s career and requires repairs. Not to mention, some ships are equipped with special radars, called mentors and knowledge, that help the ship avoid reefs. However, once the ship enters uncharted territory, it will inevitably hit a reef at some point.

When I hit the reef, I retreat and reflect. During the repairs, I locate where the reef was on a map. I ask myself why I navigated toward the reef and whether I can avoid it next time. I seek advice from mentors and books on how they have navigated the path. Then, I embark again.

July 4, 2024, 160 Claremont Ave, New York

Failure framework: experimental, expensive, pivotal, and avoidable

Failure is the inability to meet an expectation. The expectation is the key component. The position of the expectation dictates whether the outcome is a success or a failure. The position is associated with quality, standards, regulations, and laws. Individuals, organizations, and nations set different positions.

The expectation sets the state of the outcome. The analogy of a glass being half-empty or half-full is an expectation-based result. If I expected the glass to be full, I view it as half-empty. If I expected it to be empty, then it is half-full. The state is set by the expectation held before observing the water level.

Not all failures are the same due to differences in resource allocation, the magnitude, and the reversibility of the consequences. For example, the failure to maintain a server for financial applications is incomparable to other server failures. While the functional expectation is the same, the reversibility and magnitude of the consequence differ. Here, I present four types:

The first type is experimental failure, characterized by a high level of reversibility and repeatability with minimal resources. It is commonly observed in the research and development stage. Examples include receiving bug reports from users and collaborators. As a student, I strategically use experimental failure for exam preparation by writing exam-like questions on flashcards. I am expected to know the answers a day before the exam. When I discover problems I cannot solve, I repeat them until I meet the expectation with confidence. The phrase “fail fast, fail often” is appropriate here. It is a great way to test one’s product and software integrity, provided the consequences are minimal and the process is repeatable.

The second type is expensive failure. “Expensive” often relates to value; I prefer it over “costly,” which solely connotes negativity. Not all failures are expensive: an expensive failure involves a substantial resource allocation. In machine learning, this could be attaining sub-optimal performance in trained models. In simulations, it is failing to reach convergence after a weeks-long effort. In experimental work, it is the failure to validate a hypothesis after 3 to 6 months of dedicated work. For researchers, it includes manuscript and grant rejections. For students, it could be poor midterm and final exam grades. As a junior in college, I enrolled in a graduate-level electrical engineering course called Deep Learning. Despite failing to meet my grade expectation, the failure was accompanied by tremendous knowledge gained.

Professionals encounter expensive failures. While the consequences are significant and could potentially cost one’s job, it is important to recognize that such failures require a substantial mental commitment to achieve high expectations. Those who achieve these expectations often reap benefits not available to those who do not attempt to do so. Therefore, although deciding to take on such risks involves potential downsides, I believe that (1) the willingness to allocate substantial resources to achieve high expectations, (2) the ability to take ownership of failures, and (3) the capacity to make improvements are prerequisites for success.

The third type is pivotal failure. This failure significantly affects one’s life trajectory and is often associated with a great magnitude of consequence and irreversibility. Examples might include failing to find a job in a specific industry, being rejected from programs, failing licensing exams, or losing an election as a politician. Those with high expectations may encounter this pivotal failure more frequently due to the scarcity of available resources.

The fourth type is avoidable failure. These failures are best avoided as they are not only irreversible but also costly. They involve failing to achieve expectations set by regulations, laws, and practices. Examples include failing to meet safety checks required to operate a lab, committing academic plagiarism, or failing to comply with regulations and laws. These failures result in wasted resources, and their lessons are best learned from the mistakes of predecessors.

This framework offers a way to categorize and understand failure. However, the four types of failure can coexist in varying proportions. For instance, some experimental failures may also be avoidable or even pivotal.

June 16, 2024, 160 Claremont Ave, New York

Two types of innovation and evaluation

In Christensen’s disruptive innovation theory, innovation is categorized into two types. The first type improves on established metrics set by the community. In quantum physics and chemistry, scientists develop approximation techniques that solve the Schrödinger equation and match experimental results. The performance of neural networks for image classification was measured against benchmarks such as the CIFAR-10 and CIFAR-100 image datasets.

The second type of innovation underperforms on the primary performance measure but introduces a secondary one. This secondary performance appeals to a niche group. Density functional theory (DFT) introduced a new performance measure, computational efficiency, by recasting the many-body Schrödinger problem in terms of the electron density, a function of only three spatial coordinates, rather than the full wavefunction. DFT has enabled materials scientists to study phase transitions and kinetics.
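
To make the efficiency point concrete, here is the standard schematic (a textbook statement of the Hohenberg-Kohn idea, not a claim from the original post): the many-electron wavefunction depends on 3N coordinates, whereas the density DFT works with depends on only three, and the ground-state energy follows from minimizing a functional of that density (spin omitted for brevity):

\[
\Psi(\mathbf{r}_{1}, \ldots, \mathbf{r}_{N}) \;\longrightarrow\;
n(\mathbf{r}) = N \int \lvert \Psi(\mathbf{r}, \mathbf{r}_{2}, \ldots, \mathbf{r}_{N}) \rvert^{2}\, d\mathbf{r}_{2} \cdots d\mathbf{r}_{N},
\qquad
E_{0} = \min_{n} E[n].
\]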

With the two types of innovation discussed, we strive to measure performance with both objective and subjective measures. For the second type, this is difficult. Jensen Huang of Nvidia said, “I find KPIs hard to understand. What’s a good KPI? Gross margins, that’s not a KPI. That’s a result.” Nvidia is known for investing in new fields such as computational drug discovery and materials science, beyond its origins as a computer-graphics chip design firm.

There are established measurables in academia and finance for evaluation. These are results, and results are goals. We do not constantly measure goals; they are our guiding stars. We observe and decide which star to follow. The destination is not the star itself; we use it as a tool to navigate our lives. Instead, we measure our velocity and how our operations align with these stars.

Hence, I must develop my own subjective criteria to evaluate my progress for the second type of research. There is no checklist. I ask open-ended questions and determine my progress based on my confidence in answering these questions.

First, I ask whether my research output provides immediate value to a niche group of scientists. Second, I consider whether it has the potential to attract users beyond the existing community. The commonality between DFT and personal computing is their ability to attract a new cohort of users with secondary performance measures—efficiency for DFT and ease of use for personal computing. Lastly, I assess whether my research outcomes have the potential to be adopted by the existing scientific community.

June 12, 2024, 160 Claremont Ave, New York

Goal

I have goals. I think about my most cherished goal every few hours. I sleep and wake up with it. This goal serves as a guiding star, providing a sense of direction regardless of the circumstances. Thinking about the goal itself is magical. It generates a sense of purpose. All my actions and time are directed towards it. Any work that may seem trivial on its own is no longer trivial. It is a step required to achieve the goal.

The goal itself does not provide detailed action plans. Instead, my brain subconsciously explores options and proposes action steps required to achieve the goal. New action steps materialize when I am resting. I record them on a device. I use my conscious brain to filter and prioritize them.

I record what I need to accomplish today, this week, this month, this quarter, this semester, and this year. I have daily to-dos. I do not always check off all the list items. I focus on what I have accomplished. As my brain is explorative with ideas and action steps, there are always more than I can complete in a day. If there aren’t enough, I ask for more.

For my daily hours, I utilize a 50-minute focus, 10-minute rest technique, averaging 10 to 12 hours a day at home with no distractions. I play basketball or listen to music between sessions. I prioritize my physical health above all else. I do not need to force myself. I just consistently work towards the goal.

June 9, 2024, 160 Claremont Ave, New York

Embrace duality for excellence

An electron can be modeled with states such as “spin up” and “spin down,” among others described by quantum numbers. These varying states coexist in superposition until one of the states is observed with a certain probability.

Similarly, multiple states of emotions and thoughts may coexist. Our mental state is not binary. We may express a specific mental state—either happy or sad—only when we state it, similar to how an electron manifests a single energetic state when measured. The written or verbal statement may not depict the superposed states. The expressed state merely has a higher probability than the others, similar to how, at room temperature, an electron is most likely to be observed in its lowest energy state.

Elite athletes, such as Michael Jordan (MJ), exemplify both confidence and humility. MJ scored game-winning points in NBA and NCAA championship games. Yet he also displayed humility, working with his trainer, Tim Grover, for nearly two decades to improve his three-point shooting percentage and to transform into a mid-range shooter. MJ showed a willingness to listen and adhere to practice and diet routines. In practice, he was ruthless, yet he could not hold back his tears after winning his sixth championship, cradling the trophy in his arms. Duality and plurality of emotions may coexist. The probability of each emotional state is merely altered by circumstances, similar to how temperature influences the distribution of electronic states.

I build open-source programs that help experimentalists analyze synthesized crystal data. I design data structures for crystal geometries, develop command-line user interfaces, and generate publication-quality figures. I am proud and confident in my ability to deliver results. Nonetheless, I recognize that my craftsmanship can still be elevated compared to other open-source projects. Kobe Bryant said, “Once you know what it is in life that you want to do, then the world basically becomes your library. Everything you view, you can view from that perspective, which makes everything a learning asset for you.”

I could enhance my code by using matrices to compute atomic distances instead of relying on for-loops. I could improve the flow of the command-line interface by seeking feedback from users without programming expertise. My goal is not just to create good programs that merely work. I aim to craft phenomenal inventions that are loved by my users. I invest my time—a part of my life—in learning and applying unit testing, static type checking, continuous integration, and any other practices that elevate my craft. No audience watches elite athletes’ individual practices in the morning, but that is where their legacy begins.
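
As a minimal sketch of the for-loop versus matrix point (illustrative NumPy only; the function names are hypothetical and not taken from my packages):

```python
import numpy as np

def pairwise_distances_loop(coords: np.ndarray) -> np.ndarray:
    """Naive O(N^2) Python double loop over Cartesian coordinates of shape (N, 3)."""
    n = len(coords)
    d = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            d[i, j] = np.sqrt(np.sum((coords[i] - coords[j]) ** 2))
    return d

def pairwise_distances_matrix(coords: np.ndarray) -> np.ndarray:
    """Same distances via broadcasting: one (N, N, 3) difference array, no Python loops."""
    diff = coords[:, None, :] - coords[None, :, :]
    return np.linalg.norm(diff, axis=-1)

# Three atoms at arbitrary Cartesian positions (angstroms).
atoms = np.array([[0.0, 0.0, 0.0],
                  [1.5, 0.0, 0.0],
                  [0.0, 2.0, 0.0]])
assert np.allclose(pairwise_distances_loop(atoms), pairwise_distances_matrix(atoms))
```

For crystals, a real implementation would also fold in periodic images, but the speed-up principle is the same.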

June 2, 2024, 160 Claremont Ave, New York

Writing principles

Words in a sentence are ordered based on purpose. While the sentence “The role of titanium is discussed in the first section” is clear in meaning, the first four words do not serve the purpose of informing the reader about the structure of the paper. I instead write, “First, we discuss the role of titanium.”

A sentence requires precise words to avoid miscommunication. In fields such as the military and medicine, professionals avoid colloquial words. Surgeons use the term “correct” instead of “right” during procedures. I recently rewrote “Find the number of atoms in a formula” to “Count the unique elements in a chemical formula.” In this context, “formula” could refer to a mathematical formula, and “Find the number” is replaced with “count.” “Elements” refers to unique atom types.

A sentence is abridged with the correct noun-verb pair. Adjectives describe nouns. Adverbs describe verbs. Adjectives and adverbs may not be necessary when the noun-verb pair adequately conveys the meaning.

A sentence should not start with symbols. If an acronym is unfamiliar to the audience, the full name should be used multiple times.

A sentence must be clear, descriptive, and concise in that order.

I do not connect two or more full sentences with conjunctions. A single sentence is easier to read. A conjunction should be used sparingly. “Less is more,” Steve Jobs said.

In programming, we refactor comments, names, and structures after the function is implemented. In writing, words, sentences, paragraphs, and outlines are refactored after the meaning is conveyed.

Repetition is not harmful if it enhances clarity.

I avoid using adjectives and adverbs to prevent miscommunication between colleagues and readers. These parts of speech are immeasurable.

June 1, 2024, 160 Claremont Ave, New York

Purpose of research

In crystallography and solid-state science, in particular, research serves to (1) discover new substances with potential utility, (2) propose methods, and (3) characterize the underlying structure and phenomena with a category such as space group.

Here, the unifying theme is to produce new knowledge. The main difference between humans and other species is our ability to store and retrieve generational knowledge across time and space. Hence, the production and propagation of knowledge is a distinctly human endeavor. This activity equips us with the materials and methods to become the apex predator of the animal kingdom.

In contrast, this very knowledge also equips us with the capacity to destroy ourselves. Laws, regulations, and orders impose boundaries on individuals, organizations, and nations. The boundaries prevent the misuse of power derived from this knowledge. This is evident in recent history, particularly from the 1940s onwards. Hence, research is a sacred activity; it is a human activity, aimed at advancing our civilization by producing new knowledge.

June 1, 2024, 160 Claremont Ave, New York

“Good luck!”

I favor the word “serendipity” over “luck.” Both words represent an unexpected beneficial outcome. They differ in terms of self-direction and initiation. The well-known explorer isn’t merely lucky to discover new lands. I am not merely lucky to have crossed the Pacific Ocean to be in one of the lands. I decided to be here. Yes, I do not neglect what I was provided with. Nonetheless, serendipity implies a degree of will.

As a student, serendipity is the occurrence of exam problems I’ve solved several times before. As a researcher, it’s the moment I figure out how to optimize data structures and create incredible figures with Matplotlib, or when I discover open-source code that helps me learn computing with matrices. As a writer, it’s the moment when just a pair of a noun and a verb forms a better sentence. The term “serendipity” encapsulates this sense of self-initiative and direction towards solving a problem. So, when I say “Good luck,” it has a meaningful context too.

June 1, 2024, 160 Claremont Ave, New York

The Structure of Scientific Revolutions - Thomas S. Kuhn

It is widely accepted that science evolves linearly based on the body of knowledge curated by predecessors. The phrase “…standing on the shoulders of giants” by Isaac Newton embodies the notion that scientific advancements are built atop existing theories and concepts. However, The Structure of Scientific Revolutions by Thomas S. Kuhn claims scientific revolutions are neither linear nor cumulative.

According to Kuhn, not all theories and concepts are defined as “revolutions.” Kuhn indicates that scientific revolutions only occur when there is a shift from the existing paradigm to a new paradigm within a scientific community. A paradigm is described as a standard of equations, techniques, apparatuses, and educational systems that a scientific community has embraced and practiced. A paradigm provides the common technical vocabulary that allows scientists in the community to articulate concepts and collectively conduct experiments. In the late seventeenth century, Newton’s Principia Mathematica provided a set of equations and techniques that gave rise to the doctrine of classical physics, also referred to as Newtonian physics. Newtonian physics became the standard paradigm that models the motion of particles and gravity. However, Kuhn argues that long-standing paradigms often fail to explain the anomalies observed in nature, which causes community members to lose trust in the existing paradigm. Scientists, often young and new to the field, seek a new set of equations radically distinct from the existing paradigm. Eventually, one dominant theory emerges and displaces the old paradigm, as in Einstein’s general theory of relativity, which accounted for the anomaly found in the orbit of Mercury that Newtonian physics failed to explain. Kuhn outlines the process of scientific revolutions with a framework consisting of four phases: pre-science, normal science, crisis, and revolution, whose last phase is resolved by a paradigm shift.

The first phase within Kuhn’s framework of scientific revolution is defined as pre-science. While individual scientists attempt to discover new theories during pre-science, there is no dominant set of equations, techniques, and concepts referred to as a paradigm. During the pre-paradigmatic period, scientists observe and collect facts. Due to the lack of a common paradigm, scientists within each pre-paradigmatic school confront one another and interpret these facts in different ways. Pre-science is further characterized by a lack of common scientific vocabulary. The lack of common language hinders collaboration amongst scientists and schools. Thus, Kuhn describes pre-science as the least productive phase in the framework.

The transition from pre-science to normal science occurs as one set of theories and concepts becomes dominant within the scientific community. The distinction between pre-science and normal science is the existence of a paradigm. Kuhn explains that normal science “is predicated on the assumption that the scientific community knows what the world is like,” comparing a paradigm to a “map” that guides scientists towards modeling nature. Kuhn illustrates that research within the paradigm of normal science is also analogous to “puzzle-solving,” where the problems and questions within the paradigm are scattered pieces of solvable puzzles. The puzzle pieces are fit together in a complete shape through refinement and precision. The comparison of a paradigm to a map and puzzle-solving assumes that the scientific community is capable of knowing nature guided by the paradigm. The period of normal science is marked by cumulative and linear developments facilitated by advancements in measuring devices and techniques. Newton’s universal law of gravitation in Principia Mathematica published in 1687 approximated the Moon’s orbital period based upon the principle that attractive gravitational force exists between two objects. Furthermore, using the same principle, Newton predicted the motion of other planets in the Solar system. Within the paradigm of normal science, research questions and facts collected serve to support the existing paradigm. Normal science is not focused on novelty but rather precision and confirmation.

The transition from normal science to crisis takes place when new inexplicable findings referred to as anomalies threaten the foundation of the existing paradigm and cast widespread doubt within the scientific community. As measuring techniques and devices improve, anomalies become easier to detect and harder to avoid within the scientific community. The anomaly in Newtonian physics was first observed by Le Verrier, a French astronomer, in 1859. Through Le Verrier’s improved mathematical technique of predicting the motion of Mercury, he discovered that there was a 43 arcsecond per century discrepancy between the theoretical value of Newtonian physics and the observed precession of the perihelion of Mercury. Perihelion is the point in the orbit of a planet nearest to the Sun. One of the ways scientists respond to an anomaly is by devising ad hoc modifications of their theory in order to eliminate any apparent conflict within the paradigm. In response to the discrepancy in Mercury’s precession, some scientists who defended Newton’s paradigm assumed that there was invisible dust between the Sun and Mercury that affected the precession. Others proposed that a new planet, Vulcan, orbited close to the Sun and was responsible for the discrepancy. As the anomaly remains inexplicable within the existing paradigm, scientists in the community become more critical of the paradigm and begin to question its underlying foundations. The widespread acknowledgment of these inconsistencies within the existing paradigm and the introduction of new theories illustrate the defining characteristics of crisis. During a crisis, scientists, often young and less invested in the existing paradigm, seek theories outside the boundary of the paradigm in order to explain the anomaly.

An alternative paradigm is established when a new set of theories and concepts that explains the anomaly becomes widely accepted by the scientific community. In the case of the 43 arcseconds per century anomaly found in the precession of Mercury, it was Albert Einstein’s general theory of relativity published in 1915 that precisely modeled Mercury’s orbit without discrepancy. Einstein’s new theory superseded Newton’s universal law of gravitation and became the standard for predicting a planet’s orbit. The displacement of the old paradigm by a new paradigm marks the defining characteristic of Kuhn’s fourth phase of revolution, in which the newly constituted dominant paradigm entirely reconstructs the fundamental methods, generalizations, and rules of the old paradigm. The shift to Einstein’s theory of relativity in which time and space are not fixed demonstrates that the foundations behind a new paradigm are not cumulative but rather radical. However, Kuhn notes the cyclical and periodic nature of these paradigm shifts or transformations, in which scientific revolution circles back to the period of normal science. After a new paradigm is introduced, the community enters the phase of normal science with scientists of the new order aiming to improve the precision of the paradigm. In the case of Einsteinian physics, the theory of general relativity was further used to predict the movement of the precession of perihelion in other planets. Just as in the shift from Newtonian to Einsteinian physics, according to Kuhn’s framework of scientific revolution, scientists of Einsteinian physics will discover anomalies that lead to new crises, and the established paradigm will again be transformed.

While scientific revolutions accomplished by paradigm shifts within a scientific community seem to advance the knowledge of science towards truth, Kuhn maintains that the role of scientific revolution lies in providing a new “map” that serves to temporarily guide scientists until anomalies are observed. Furthermore, there is no linear progression towards truth but only periodic rise and fall of paradigms. As Kuhn describes, “Einstein’s general theory of relativity is closer to Aristotle’s than… to Newton’s.” Based on Kuhn’s analysis, Newton’s phrase “…standing on the shoulders of giants” is partially accurate during the period of normal science. However, the structure of scientific revolutions as a whole is neither cumulative nor linear but rather a cycle of paradigmatic transformation.

Fall 2021, EID 367, The Cooper Union

The Innovator’s Dilemma: When New Technologies Cause Great Firms to Fail - Clayton M. Christensen

Despite seemingly sound managerial practices, such as listening to existing customers and continued investment in technology, great companies are often displaced as market leaders. According to Clayton M. Christensen, in The Innovator’s Dilemma: When New Technologies Cause Great Firms to Fail, it is precisely due to these widely accepted practices of good management that leading firms are confronted with great profit loss and eventual bankruptcy. The fall of these incumbents begs the question, “Why do great companies fail?” In his book, Christensen highlights the failure of incumbent firms to recognize and respond to new technological developments as the source of their decline.

Christensen coined the phrase “failure framework,” which illustrates the process of how certain types of new technologies cause established companies, also referred to as incumbent companies, to fail. Christensen defines technology as either sustaining or disruptive. Sustaining technology iteratively and incrementally improves upon an existing performance measure, a requirement or a specification demanded by customers. In the hard disk drive industry in the 1980s, mainframe manufacturers such as IBM demanded large storage capacity requirements of 300 to 400 MB from the 14-inch hard drive. In contrast, disruptive technology, often developed by entrant companies, initially underperforms in comparison to the performance measure of sustaining technology but introduces a new performance measure. In the same hard disk drive case study, Christensen cites the architectural innovations of the 8-inch hard disk drive as disruptive technology, which introduced a new performance measure of small size demanded by desktop and mini-computer manufacturers such as Hewlett-Packard. Disruptive technology initially fails to meet the performance requirement of the incumbent companies’ customers and must find a new or niche market that values the newly introduced performance measure for survival. The 8-inch drive with the storage capacity of 40 MB initially failed to meet the storage capacity demand of the mainframe manufacturers. Christensen’s failure framework describes how, despite this initial underperformance, disruptive technology eventually causes the failure of established companies. The failure framework consists of three principal components: technology maturation, performance oversupply, and resource dependence.

Technology maturation, the first component in Christensen’s failure framework, introduces a challenge for sustaining technology to maintain the rate of improvement in performance. Technology maturation occurs at the end of the Technology S-Curve. The Technology S-Curve is a graphical representation which demonstrates the rate of performance improvement with performance on the vertical axis and engineering effort or time on the horizontal axis. As the curve progresses towards the mid-section in the horizontal axis, the slope of the curve increases as technology becomes more understood and more resources are allocated. Technology maturation occurs when performance on the vertical axis asymptotically approaches a natural or physical limit as engineering effort or time further progresses. Returning to the disk drive case study, while the 14-inch drives approached the asymptote of technology maturation with annual performance improvement limited to 22 percent, the 8-inch drives, designed to optimize storage, benefited from the advances in storage capacity with annual storage capacity improvements of 40 percent. The reduced rate of performance improvement due to technology maturation of the 14-inch drives led existing customers of the sustaining technology to pay a premium for the same incremental performance improvement. Customers of the 14-inch drives paid 1.65 USD per megabyte improvement, 13 percent higher than 1.42 USD of the 8-inch hard drive.

The second component of the failure framework, performance oversupply, occurs as the rate of performance improvement exceeds the performance requirement. Continuing the disk drive case study, Christensen presents the storage capacity of 5.25-inch drives, which exceeded 300 percent of the desktop manufacturers’ performance demand. Meanwhile, the 3.5-inch drives, which initially underperformed, ultimately satisfied the storage demand of the desktop manufacturers by maintaining the rate of storage improvement. Consequently, by 1988, the 5.25- and 3.5-inch drives both met the performance demand of the desktop manufacturers. At this point, as desktop manufacturers no longer required a drive with higher storage capacity, customers began seeking other features such as functionality, reliability, convenience, and price. As demonstrated in the desktop computer market, during this period of performance oversupply, existing customers of sustaining technology migrate to disruptive technology. In 1985, only 1 percent of the desktop manufacturers migrated from 5.25- to 3.5-inch drives. Within 4 years, however, the 3.5-inch drives accounted for 60 percent of total drive sales. The period of performance oversupply and the continued rate of performance improvement of disruptive technology shifted the basis of competition from storage capacity to other features such as portability and price.

The third component of the failure framework, resource dependence, describes the tendency of a company to allocate resources towards serving its existing customers. Because a company depends on satisfying existing customers and, in return, generating the profit that covers its operational expenses, incumbent companies seek more definitive ways to maintain or increase profitability. As a result, incumbent companies aggressively invest in sustaining technology and attempt to lead existing customers to higher-end products with higher gross margins. Incumbents choose not to allocate resources to disruptive technology, where gross margins are lower and the market is unpredictable and smaller. In the case of the hard disk drive industry, Seagate, the incumbent of the 5.25-inch hard drive market for desktop manufacturers, initially canceled the 3.5-inch drive program and continued innovating the 5.25-inch model, where customers paid higher prices for incremental megabytes of capacity. In 1987, despite the emergence of customer migration from 5.25-inch to 3.5-inch hard drives, Seagate executives initially disregarded the 3.5-inch market due to the smaller market size of 50 million USD and lower gross margins of 22 percent compared to the current 5.25-inch market with 300 million USD and 25 percent. By 1991, the 3.5-inch market grew to 700 million USD as new customers such as portable laptop manufacturers emerged, and simultaneously desktop manufacturers further migrated to the 3.5-inch disk drive during performance oversupply of the 5.25-inch drive. While Seagate eventually attempted to allocate resources for the 3.5-inch drive in 1988, Christensen cites that by 1991 none of Seagate’s 3.5-inch products had been sold to portable/laptop/notebook computers. In 1997, Seagate reported a 550 million USD net loss in sales.

Why do great companies fail? Christensen’s failure framework illustrates the process of how disruptive technology drives sustaining technology developed by incumbent firms in the mainstream market to fail. Sustaining technology incrementally improves upon the performance measure demanded by the existing customers. In contrast, disruptive technology, while initially underperforming in the performance measure of sustaining technology, introduces a new performance measure. As the performance demand of existing customers is met by both sustaining and disruptive technology, customers seek other features such as portability, functionality, and price which are offered by disruptive technology. At this point, customers of incumbent companies migrate to disruptive technology. As the process of migration continues, incumbent companies are displaced by these entrant firms and disruptive technology prevails.

Fall 2021, EID 367, The Cooper Union

The Discoverers: A History of Man’s Search to Know His World and Himself - Daniel J. Boorstin

Although it is often believed that the widespread adoption of technological innovation is driven by individual genius, the acceptance and rejection of a technology are affected by elements over which the creator has no control. These elements are referred to as human dimensions. The historical accounts presented by Daniel J. Boorstin indicate that human dimensions such as political, religious, and social factors influence the acceptance and rejection of a technology.

In early fifteenth-century Portugal, innovation in shipbuilding was stimulated by the political influence of Prince Henry, also known as Henry the Navigator. With political and economic interests in seafaring across the west coast of Africa, Prince Henry utilized national resources to create an infrastructure for shipbuilding both in his court, referred to as the “Research and Development Laboratory” by Boorstin, and in Sagres, a coastal city of south Portugal. Under the leadership of Prince Henry, caravels were optimized for seafaring. The new lateen-rigged caravel design allowed the mariners to travel 55 degrees into the wind compared to the 67 degrees of the previous square-rigged design, reducing time at sea by approximately one-third. Furthermore, the introduction of a shallow deck allowed the mariners to explore inshore waters and to beach the ship for carpentering and repairing. The political leadership of Prince Henry ultimately fostered an innovative hub for shipbuilding as Boorstin describes, “under Prince Henry’s stimulus, Lagos, a few miles along the coast of Sagres, became a center of caravel-building,” and attracted talents such as “…the shipbuilders and carpenters, and other craftsmen.” Despite the death of Prince Henry in 1460, Sagres continued to remain the center of shipbuilding innovation. Prince Henry’s maritime agendas for innovation exemplify the influence of politics in fostering an environment for advancement and acceptance of a technology.

In contrast to the political influence that incubated Sagres as a region of maritime development in Portugal, during the same period, anti-maritime policies induced the rejection of shipbuilding technology in China. During the early fifteenth century, the Chinese navy led by Cheng Ho possessed remarkable shipbuilding technology. The Chinese navy had the capacity to deploy up to 317 ships. The largest ships were up to 444 feet in length and 180 feet in beam, compared to Prince Henry’s caravels, which were 70 feet in length and 25 feet in beam. The Chinese navy also invented watertight bulkheads, which partitioned the ship and prevented the spread of fire and water. However, after the death of Cheng Ho’s patron, Emperor Yung Lo, in 1424, the next emperor enacted anti-maritime laws and introduced capital punishment for unapproved travels abroad. In contrast to the political support of shipbuilding by Prince Henry in Portugal, the Chinese prohibited shipbuilding, a policy under which “shipyards disintegrated, sailors deserted, and shipwrights fearing to become accomplices in the crime of seafaring.” Consequently, by 1474, the fleet of 400 warships diminished to 140, and by 1525, “Chinese officials were ordered to destroy all such ships… perfecting laws and organizing officials to suppress all seafaring.” The juxtaposition of the acceptance and rejection of the shipbuilding technology led by the political authority of these two states demonstrates the significant role of the political dimension in technology adoption and innovation.

The Roman Catholic Church’s early support of Gutenberg’s Bible, which utilized the movable type, is an example of religion’s role in fostering technological innovation. Johannes Gutenberg, a publisher and goldsmith born in the Holy Roman Empire, invented the first movable-type printing press in 1454. The movable type press initially cost more to build, requiring “early investment as the preparation of a wood block or copper plate was costly.” The Roman Catholic Church, with interest in the mass production of the Bible, provided economic support to Gutenberg. Consequently, with the support of the Church, the printing press industry grew exponentially. In Europe, towns that had printing presses increased from 11 to 238 between 1480 and 1500. The movable type press further propagated beyond the printing of the Bible to the printing of ancient classics such as Aristotle, Caesar, and Ptolemy’s Geography. The religious influence of the Church not only contributed to the development of technology through capital support but also sparked a transition from handwritten manuscripts to printed books, which greatly enhanced the scalability of knowledge dissemination in fifteenth-century Europe.

While Christian influence spurred the widespread adoption of the movable type across Europe, this same religious influence also suppressed map-making technology. In the second century AD, Ptolemy’s astronomical and observation-based cartography and standards were carefully designed and adopted by the Roman Empire. Ptolemy invented the grid system of dividing the earth’s sphere into latitudes and longitudes for eight thousand areas. However, after Christianity came to dominate the Roman Empire, orthodox Christian geographers discarded Ptolemy’s accumulated body of knowledge in cartography. Boorstin recounts, “Christian faith and dogma suppressed the useful image of the world that had been so slowly, so painfully, and so scrupulously drawn by ancient geographers.” Instead, orthodox Christian geographers used literal interpretations from Christian dogma and biblical scripture to formulate their own map. The Christian maps, regardless of their accuracy, became the “guides to the Articles of Faith.” Boorstin describes the period as the “Great Interruption,” in which cartographic innovations were halted and technology retreated. The contrast between the adoption of the movable type and the suppression of Ptolemy’s cartography under the influence of Christianity demonstrates the influence of religion on technology acceptance and rejection.

Lastly, the social component within the human dimensions may induce acceptance and rejection of technology. In the city of Lyons, France, in 1481, prior to the adoption of the clock, public bells served not only as the broadcasting medium between the people and the town councils and churches, but also as a source of identity and communion for the town’s people and institutions. Boorstin states, “Churches, monasteries, and whole towns were judged by the reach and resonance of the peals from their tower.” Upon the advent of public clocks, the citizens of Lyons themselves demanded the installation of a public clock and petitioned their town council, stating, “if such a clock were to be made, more merchants would come to the fairs, the citizens would be very consoled and cheerful…” Boorstin indicates that the prior technical knowledge and experience of bell-casting not only encouraged wide adoption but also “advanced the art of the clockmaker and encouraged the elaboration of clocks.” The societal demand for a public clock by the citizens of Lyons demonstrates that the acceptance of technology is influenced by prior exposure to similar technology and social values.

While the acceptance of the public clock was made possible by direct social demands from the citizens of Lyons in 1481, the same clockmaking technology was decisively suppressed with the rise of another social group known as the French guilds in Paris in 1544. As clockmaking technology “enticed men across boundaries of religions, language, and politics” throughout Europe, in Paris, the French guilds only supported an organized association of clockmakers and merchants who enforced the guild’s monopoly against foreigners. The French guilds not only imposed heavy duties on their members, but also inhibited the growth of an ecosystem crucial for innovation, for example by “restricting numbers of apprentices and of workshops.” While the French government “imported” Henry Sully, the famous clockmaker, and 60 other craftsmen from England to invigorate the clockmaking industry in France, the attempt failed as the French guilds suppressed Sully’s workshops. The juxtaposition between the acceptance of clockmaking technology led by the citizens of Lyons and the suppression by the guilds indicates that the interests of social groups affect the outcome of technological innovation, often more powerfully than the influence of politics.

The rise and fall of innovative spirits in shipbuilding, cartography, printing, and clockmaking were identified in the historical case studies across Portugal, China, and France. The juxtaposition of the acceptance and rejection of these technological innovations due to the various influences of political, religious, and social factors corroborates the thesis that technology acceptance and rejection are a function of both human dimensions and the utility of the product. Prior to building commercial technologies and mastering engineering knowledge, innovators must also consider the various political, religious, and social contexts in which these technologies exist.

Fall 2021, EID 367, The Cooper Union

The Two Cultures and the Scientific Revolution - C. P. Snow

Steve Jobs, co-founder of Apple Inc., stated, “It’s in Apple’s DNA that technology alone is not enough — it’s technology married with liberal arts, married with the humanities, that yields us the results that make our heart sing.” By recognizing the power of this union between the humanities and the sciences, Steve Jobs created numerous tools that improved the accessibility and user experience of personal computing devices, catapulting Apple Inc. into one of the most innovative companies in the world.

In his 1959 lecture at Cambridge University, published as The Two Cultures and the Scientific Revolution, C. P. Snow, a British physicist and novelist, warned of the widening separation between the two disciplines, the humanities and the sciences. Snow asserted that “the intellectual life of the whole of Western society is increasingly being split into two polar groups.” Political leaders who predominantly studied the humanities within the traditional British educational system were ill-equipped to lead the nation in the age of the scientific revolution. Snow argued that the elites often rejected the innovation of scientists, stating that while scientists had the “future in their bones,” the “traditional culture” or the elites responded “by wishing the future did not exist.” Having recognized the threat of this divide to national competitiveness a decade after World War II, Snow called for a unification of the two disciplines with the final remark, “closing the gap between our cultures is a necessity in the most abstract intellectual sense, as well as in the most practical.”

Steve Jobs’ success in applying his knowledge of calligraphy from the humanities to the development of personal computing devices illustrates the intrinsic role of interdisciplinary diversity in innovation. When the first Macintosh computer was released in 1984, for the first time in the history of personal computing, Jobs provided users with a wide assortment of digital fonts and typeface designs such as Helvetica and Times New Roman. The ability to customize the font, along with a human-centric user interface, improved the accessibility of personal computing devices. During Jobs’ commencement speech at Stanford University in 2005, he recalled his calligraphy experience at Reed College in the 1970s as “…beautiful, historical, artistically subtle in a way that science cannot capture.” He further stated that, had he not studied calligraphy in his 20s, “personal computers might not have the wonderful typography they do today.” His collective insight into the humanities and software technology changed how humans interacted with machines and demonstrated the significance of interdisciplinary diversity in technological innovation.

Steve Jobs, one of the most innovative entrepreneurs of the 21st century, improved the accessibility of personal computing devices by combining knowledge from the humanities and the sciences. Conversely, this innovation also precipitated the bankruptcies of numerous companies that failed to recognize the threat of the interdisciplinary divide that C. P. Snow had warned of. Therefore, technological leaders and entrepreneurs of today must recognize and embrace interdisciplinarity as an indispensable element of innovation.

Fall 2021, EID 367, The Cooper Union