Back-to-back @ ISR

My paper with Muller Cheung, “Seeker Exemplars and Quantitative Ideation Outcomes in Crowdsourcing Contests,” has been accepted by Information Systems Research. A key insight in this work is how the common practice of showing seeker exemplars in ideation contests to guide and inspire solvers could result in (i) a smaller and less diverse search space explored and (ii) fewer and lower-quality ideas submitted by solvers. In other words, the benefits that seeker exemplars bring to ideation contests might come at some expense to the seekers.

As with all my projects so far, there is a backstory to this paper’s path to publication; the details can wait for another time, maybe. A shoutout to the ISR editors and reviewers, who gave us great feedback that brought our work to a higher level. I am extremely glad that this research has found a nice home and, at the same time, has given me my first back-to-back ISR publications.

Abstract: Idea seekers in crowdsourcing ideation contests often provide solution exemplars to guide solvers in developing ideas. Solvers can also use these exemplars to infer seekers’ preferences when generating ideas. In this study, we delve into solvers’ ideation process and examine how seeker exemplars affect the quantitative outcomes in solvers’ scanning, shortlisting, and selection of ideas; these ideation activities relate to the Search and Evaluate stage of the Knowledge Reuse for Innovation model. We theorize that solvers’ use of local (problem-related) and/or distant (problem-unrelated) seeker exemplars in the respective search and evaluation activities is affected by their belief and emphasis in contests as well as the influences of processing fluency and confirmation bias during idea generation. Consequently, local and distant seeker exemplars have different effects in different ideation activities. Consistent with our theorizing, the results from an ideation contest experiment show that, compared to not showing any seeker exemplars, providing these exemplars either does not affect or could even hurt the quantitative outcomes in the respective ideation activities. We find that solvers generally search for, shortlist, and/or submit fewer ideas when shown certain seeker exemplars. Moreover, solvers who submit fewer ideas tend to submit lower quality ideas on average. Thus, showing seeker exemplars, which contest platforms encourage and seekers often do, could negatively affect quantitative ideation outcomes and thereby impair idea quality. We discuss the theoretical and practical implications of this research.

The final draft is available at

Paper accepted @ ISR

My paper “Adopting Seekers’ Solution Exemplars in Ideation Contests: Antecedents and Consequences” has been accepted at Information Systems Research.

Abstract: To benefit from the wisdom of the crowd in ideation contests, seekers should understand how their involvement affects solvers’ ideation and the ensuing ideas. This present study addresses this need by examining the antecedents and consequences of solvers’ exemplar adoption (i.e., use of solution exemplars that the seekers provide) in such contests. We theorize how the characteristics of seekers’ exemplars (specifically, quantity and variability) and prizes jointly influence exemplar adoption. We also consider how exemplar adoption affects the effectiveness of the resulting ideas, conditional on solvers’ experience with the problem domain of the contests. The results from a company naming contest and an ad design contest show that exemplar quantity and exemplar variability both positively affect exemplar adoption, but the effects are strengthened and attenuated, respectively, by prize attractiveness. The outcomes of a campaign using the ads from the design contest further show that greater exemplar adoption improves ad effectiveness (in terms of click-through performance), although this is negatively moderated by solvers’ domain experience. We discuss the theoretical and practical contributions of this research to ideation contests.

To view the final draft:

Credit to the ISR review team for helping me improve this research. The paper went through a few rounds of review and expanded from one study in the initial submission to three studies in the final version. I also received valuable comments from many colleagues in the community. The end product is definitely a much better piece of work.

The perfect publication record for getting tenure?

  1. Top-tier publication single-authored by yourself — shows that you can work independently.
  2. Top-tier publication with a peer — shows that you are a team player.
  3. Top-tier publication with a student — shows that you can supervise someone.
  4. Top-tier publication with a senior scholar in your field — shows that you can be supervised by someone.
  5. Top-tier publication in another field (e.g., quantum physics) — shows that you are interdisciplinary.
  6. Article in a top practitioner journal (e.g., HBR, Sloan Review) — shows that your research has some practical impact.
  7. A book that centres around your research topics/streams. (But if it doesn’t make the New York Times best-seller list, it doesn’t count.)

And I still haven’t checked off the first item on the list.

Awarded an early career research grant

My project proposal for the HK Research Grants Council’s Early Career Scheme (ECS) has gone through successfully. This project focuses on the use of crowd-based contests to acquire graphic designs. Specifically, I plan to examine how heterogeneity among contestants and in the information provided by contest clients affects the attributes and performance of the design submissions. This project is an extension of my research on crowd-based design contests.

The ECS is for PIs who are in their first three years as Assistant Professor at HK institutions, and I’m glad to have succeeded on my first attempt at this “rookie” grant. I’m also happy for my colleagues whose grant applications went through as well.