Incorporating UX Research in agile-based product development

Victoria Claypoole
Published in UX Collective
18 min read · Dec 28, 2021


Co-Authors: Stacey Sanchez, Clay Killingsworth, and Kay Stanney

User-Centered Design (UCD) is a broad methodological framework that places the user at the center of all design decisions (Norman, 2013). Instead of starting with a technology-based solution in mind, practitioners seek to first understand the needs of their users and to iteratively incorporate those users into the design lifecycle as active participants (Abras et al., 2004; Mao et al., 2005). UCD generally involves specifying and delineating the context of use, identifying the business requirements and user goals that constitute product success, and iteratively creating and evaluating design solutions with users.

Diagram showing the different components of User Experience.
Photo credit: Cat Hodges

Just as designers and researchers have their preferred, prescribed methodologies (like UCD) for the design and development of solutions, so too do software and product developers — one such methodology is Agile. Agile aims for early and continuous delivery of functioning software (Beck et al., 2001), and is typically marked by the relatively little time, cost, personnel, and resources required to respond to a change in requirements (Lee & Xia, 2010). This flexibility has made Agile methods among the most popular and widely adopted for software and product development (Rajlich, 2006; Hewlett Packard Enterprise, 2015; digital.ai, 2020). It’s no surprise Agile is so popular in tech communities: project managers report greater planning efficiency, stakeholder satisfaction, and project success when utilizing Agile (Serrador & Pinto, 2015), as well as enhanced change management, project visibility, and team morale (Bianchi et al., 2018; digital.ai, 2020; Dybå & Dingsøyr, 2008; Lee & Xia, 2010; Schmidt et al., 2014; Schön et al., 2017; Sfetsos & Stamelos, 2010).

However, while Agile allows product teams to deliver value to customers faster, it is not a panacea for all the woes of software and system development. Agile’s cross-functional, self-organizing teams mainly consist of software developers, interface designers, and testers. As such, they can be limited in scope when developing next-generation, innovative products that have no current design standards or paradigms to lean on. In fact, Agile techniques can pose many limitations to innovation. For instance, a heavy focus on ‘releasable products’ often means scoping work to be as simple as possible rather than focusing on solving problems, discovering solutions, and delivering innovation.

To overcome these limitations, there is a need to formally integrate UCD methodologies, researchers, and activities (e.g., user journeys, conceptual design, visual design, consideration of nonfunctional properties) into Agile, which, if done well, will yield a plethora of benefits (see Table 2). Unfortunately, current Agile frameworks make little room for research roles and related tasking; Silva and colleagues (2019), for example, describe more than ten Agile project management frameworks, none of which specifies roles for user researchers. This leaves UCD researchers (e.g., UI/UX designers, Human Factors Engineers, Cognitive Psychologists) and software developers at odds over how best to develop solutions for end users. This article provides a brief review of the most common challenges associated with integrating UCD and Agile, along with tips for overcoming these obstacles.

A table showing tips for combining UCD and Agile
Summary of Tips for Combining Agile and UCD

Challenge #1 — Fitting into Agile Ceremonies

To provide effective and continuous delivery of functioning software, Agile prescribes several ‘ceremonies’ (a fancy word for ‘team meetings’) to keep the cross-functional team on track during their Sprint (a time-boxed period aimed at achieving specific goals, typically lasting two to four weeks; Schwaber & Sutherland, 2017). Most notably, these ceremonies include Sprint Planning, Daily Standups, Sprint Reviews, and Sprint Retrospectives. UCD researchers are often unaware of where they fit within these ceremonies, and therefore may not initiate integration into the team (Abrahamsson et al., 2003) — especially as most Agile frameworks do not even specify roles and responsibilities for researchers (Silva et al., 2019). The first challenge for UCD researchers to overcome is figuring out how exactly they fit within these ceremonies. To start (Pirro, 2019):

Tip #1: Break research activities into several layers of sub-activities, timebox and cost each activity, and associate each layer with a tangible result; this will increase the likelihood that UCD activities are adopted and supported within Agile development.

Tip #2: Plan for small-scale experiments, followed by immediate data processing, interpretation, and identification of implications to product design.

Tip #3: When possible, increase the number of variables and associated data points to be collected, and ensure data drive product design outcomes.

Sprint Planning

Sprint Planning occurs on the first day of a Sprint and entails determining i) what work the cross-functional Agile team will commit to and ii) the tasks required to drive this work to completion (Schwaber & Sutherland, 2017). The biggest challenge UCD researchers face with Sprint Planning is communicating what work they will accomplish and how it relates to the overall goal of the Sprint. For example, a UCD research activity could be ‘research which technology platform leads to the most effective training.’ This activity provides value to the overall goal of the Sprint because it provides information and criteria for the software development tasks that occur in later Sprints. Effectively communicating such interdependencies is critical to successful integration of UCD research activities into Agile.

Tip #4: Determine what UCD research dependencies exist for successful execution of software development; communicate, track, and report results to provide value during the Sprint.

Tip #5: Consider starting with an upfront, user-focused research Sprint (i.e., review background literature, conduct competitive analyses, early customer interviews, site visits, focus groups, etc.), sometimes referred to as “sprint zero.” Its aim is to develop evidence-based user requirements and a guiding vision for the product’s design that inform product strategy and ongoing decision-making across the development effort.

Tip #6: For each sprint, define the objective to be achieved (e.g., what will be investigated via an experiment; what information will be obtained via survey or Subject Matter Expert (SME) interview; what will a usability evaluation focus on, etc.), the duration allotted for each objective, and the expected outcomes.

Daily Standup

Each day is marked by a brief meeting of the Agile team (the “stand-up” or “daily scrum”) that allows each member to assess progress, set goals for the day, and identify obstacles in order to mitigate current or potential problems. UCD researchers may find that reporting “I read literature on these focus areas” day after day provides little communicative value. Over time, it may even feel as if no value is being provided, both from the perspective of the UCD researcher and of the other members of the cross-functional team. Instead, in their daily status update, UCD researchers should include outcomes, findings, and data results, and indicate how each activity is contributing to the product; this ultimately increases group cohesiveness by providing visibility into research activities and expected outcomes (Dingsøyr et al., 2016). By clearly communicating progress and outcomes that explicitly convey value, all members of the Agile team gain a better, shared understanding of the work completed by (and contributions of) UCD researchers.

Tip #7: Provide descriptive, but not exhaustive, status updates that convey progress and value to the Sprint.

Tip #8: Ensure status updates address: what was done to contribute to the sprint goal? What is the next set of UCD objectives that contributes to the goal? What are the roadblocks, if any, to meeting UCD objectives?

Sprint Review

Just as the name implies, the Sprint Review is a ceremony in which the entire cross-functional team gets together and reviews (or demonstrates) the work completed during the current Sprint. Typically, software developers demonstrate their new, functioning code. But what about UCD researchers? What should they demonstrate during the Sprint Review? Deciding what to present, and in what format, is typically the biggest challenge for UCD researchers here — along with getting other members of the cross-functional team to actually care about UCD findings. In this vein, we suggest being extremely upfront about the business value and how UCD outcomes relate to the success of the product (the “so-what”). Did you conduct a literature review on the efficacy of digital avatars? Great! But no one wants to sit through a 30-minute presentation of it. Very succinctly summarize your methodology, then highlight the overall outcome and how it provides value to the product. You should be able to communicate your high-level but high-value findings in five minutes or less. Say that instead of a literature review, you conducted A/B testing of two design options. Again, the presentation of the results should take five minutes or less: succinctly present the methodology, but focus mainly on the outcomes of the testing and how they directly affect development-related dependencies. In this way, you provide information that builds the development team’s understanding of the rationale behind design decisions, which helps fulfill the human need for relatedness (Ryan & Deci, 2000) and can increase intrinsic motivation to build the best possible product (Baard et al., 2004).
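
When the item under review is an A/B test, a one-line statistical summary can carry the "so-what" in seconds. The sketch below is a minimal two-proportion z-test in Python; the function name and the example counts are illustrative assumptions, not data from any study cited in this article.

```python
# Quick two-proportion z-test for summarizing an A/B test of two design
# options. All counts here are illustrative placeholders, not real data.
from math import sqrt, erf

def two_proportion_z(success_a: int, n_a: int, success_b: int, n_b: int):
    """Return (z, two-sided p) for the difference between two rates."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)          # pooled rate
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))  # standard error
    z = (p_a - p_b) / se
    # Two-sided p from the standard normal CDF: Phi(x) = (1 + erf(x/sqrt(2)))/2
    p = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p

# Example: design A converted 45/100 sessions, design B 30/100
z, p = two_proportion_z(45, 100, 30, 100)
```

With those placeholder counts, the result reduces to a single review-ready sentence ("design A converted significantly better, p ≈ .03") — exactly the level of detail that fits a five-minute Sprint Review slot.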

Tip #9: Present the “business value” and the “so-what” of UCD research findings. Connect the results of UCD tasking with how it directly affects the product/solution, and importantly, explicitly describe how the findings are relevant moving forward.

Sprint Retrospective

The Sprint Retrospective provides a platform for the team to discuss what went well, what didn’t go well, and what improvements can be made to the process (Schwaber & Sutherland, 2017). Additionally, a key goal is to identify and develop mitigation strategies for points of inefficiency. When UCD researchers are included as members of the cross-functional team, this affords the opportunity for these critical members to voice their concerns and learn from their teammates. For example, perhaps an executive brief was not presented well in a sprint review. This allows the entire cross-functional team to identify a mitigation strategy to improve the process for the next sprint. The retrospective allows UCD researchers to learn and grow professionally within their organization and to provide process improvement suggestions for the betterment of the entire team. Go into the Sprint Retrospective ready to hear constructive criticism and be open to your teammates’ suggestions for improvement.

Tip #10: Discuss UCD results and any data points that are not in line with expectations (e.g., we thought the design was going to be engaging, but engagement scores were very low, thus design modification is recommended).

Tip #11: Provide details and foster analytical brainstorming that drives design iteration; be willing to redefine design flows that fail to meet UCD objectives; keep updating the understanding of design philosophy and rationale.

Challenge #2 — Becoming Part of the Cross-Functional Team

Just as there are challenges in determining how UCD activities fit within Agile ceremonies, so too are there challenges in how UCD researchers themselves fit in with the rest of the cross-functional team. For instance, the heavy focus on achieving a potentially releasable product each Sprint may discount the value of investing time upfront in problem solving, discovery, and innovation — ultimately preventing UCD researchers from communicating their need for time to derive R&D innovation requirements, and hindering the modification of time boxes to allow for reflection on R&D findings and innovation goals.

To overcome this, UCD researchers must be extremely communicative. Communicate how setting aside time for problem solving provides value to the product — for example, show how a little research now can save time, money, and rework down the road; after all, understanding user and stakeholder needs makes for happier customers and better solutions. Previous research has found that as few as three to five user tests can reveal major usability issues and save time during product increment development (Nielsen & Landauer, 1993; Nielsen, 2000). Communicate how the inclusion of research provides business value not otherwise available. Bonus points: collect data throughout the lifecycle of a product to report back on. Nothing communicates more effectively than real data, real money saved, and real happy customers.

Tip #12: Communicate individual responsibilities of UCD researchers and demonstrate added business value.

Tip #13: From the very outset, have all members of the cross functional team contribute to the product vision and maximize communication of design philosophy and any related changes throughout production.

Challenge #3 — Overcoming Cultural Differences

Challenges in integrating research roles and associated tasking into Agile may arise from differences in culture between traditional Agile team members and researchers. Perceived user-centered needs versus business impacts, which are important motivators for cooperation (Garcia et al., 2019; Siegel et al., 2003), are bound to differ vastly between these groups. In fact, it is not uncommon for user-centered activities to be considered optional by traditional Agile team members (Raison & Schmidt, 2013). Additional expected sources of conflict and inefficiency include role ambiguity, inconsistent terminology, and failure to reflect on and learn from process successes and failures (Bergendahl & Ritzen, 2013). While reflecting on the process is an integral part of Agile (e.g., Sprint Retrospectives and Reviews), avoiding role ambiguity and standardizing terminology will be integral to successfully integrating research roles into the team.

Also contributing to this culture clash are differences in the model that Agile versus research are expected to follow. Agile is all about building the thing right and response efficiency (i.e., on-time completion, on-budget completion, software functionality). UCD activities, on the other hand, focus on designing to meet user goals, i.e., conducting research that informs how best to build the “right” thing (Bruneel et al., 2010; Tartari et al., 2012). Thus, to be more synergistic, Agile teams must allow for time for research, and researchers must bound their activities to those that inform product design.

Tip #14: When starting a development effort, or when first integrating a UCD researcher into an Agile team, host a kickoff meeting with the entire team that introduces the new team member, their expertise and defined role, what they are responsible for, how they will contribute to the success of the project/product, and any other relevant business value.

Tip #15: Clearly lay out UCD researcher roles and responsibilities, including expected outcomes and development-related dependencies, as well as realistic expected timeframes for work completion.

Tip #16: To realize successful integration of research into Agile, allocate time to achieve shared understanding and to define dependencies between functional areas (Schön et al., 2017) — this will show all members of the cross-functional team how their work is interdependent and provide better insight into why a UCD researcher is needed.

Challenge #4 — Overcoming Methodological Differences

Most UCD researchers are accustomed to doing just that — research. Human subjects research, for example, traditionally collects a sample size specified a priori, running statistical hypothesis tests only once all data have been collected. This is good practice in confirmatory analyses for minimizing researcher bias introduced by early trends in data and reducing the temptation of “p-hacking,” or allowing the outcome of a statistical test to influence data collection or analysis decisions (Head et al., 2015). However, in product design settings this practice comes with high opportunity cost, as non-research members of the team may find themselves waiting for the results of ongoing research with little means to contribute to or move forward with product development. Moreover, it can be very costly and time-consuming to conduct such activities — especially if they provide very little overall value to product development.
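
The risk behind "p-hacking" is easy to see in simulation. The hypothetical, stdlib-only sketch below runs an A/A test (both arms identical, so any "significant" result is a false positive) and compares testing once at a fixed sample size against peeking after every batch; the batch sizes, run count, and function names are all illustrative choices, not values from the literature.

```python
# Monte Carlo illustration of why "peeking" at sequential data inflates
# false positives: the data are pure noise (an A/A test), yet checking
# significance after every batch crosses p < .05 far more often than 5%.
import random
from math import sqrt

random.seed(42)  # fixed seed so the illustration is reproducible

def significant(mean: float, n: int) -> bool:
    """Two-sided z-test at alpha = .05 for N(0, 1) data with known sigma."""
    return abs(mean) * sqrt(n) > 1.96

def run_once(peek: bool, batches: int = 10, batch_size: int = 10) -> bool:
    total, n = 0.0, 0
    for _ in range(batches):
        for _ in range(batch_size):
            total += random.gauss(0.0, 1.0)
            n += 1
        if peek and significant(total / n, n):
            return True               # stop early and declare a "finding"
    return significant(total / n, n)  # single test on the full sample

runs = 2000
peek_rate = sum(run_once(peek=True) for _ in range(runs)) / runs
fixed_rate = sum(run_once(peek=False) for _ in range(runs)) / runs
print(f"fixed-n false positive rate:  {fixed_rate:.3f}")
print(f"peeking false positive rate: {peek_rate:.3f}")
```

The fixed-n rate hovers near the nominal 5%, while the peeking rate is several times higher — which is exactly why confirmatory research insists on a priori sample sizes, even though that rigor carries the opportunity cost described above.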

UCD researchers must bound their activities such that they are designed to inform product requirements rather than to achieve statistical significance or produce generalizable knowledge. A user test with four participants would never pass statistical muster, but it can absolutely provide invaluable insights to the product team — Nielsen has repeatedly demonstrated that as few as three to five user tests can uncover most usability issues within a product increment (Nielsen & Landauer, 1993; Nielsen, 2000). Further, extending such a test to, say, 30 participants represents a substantial increase in time and resource costs for relatively little increase in value. Researchers engaged in this sort of work must remember that the goal is not advancing the underlying science but maximizing return on investment (ROI) by quickly identifying the most important product considerations or changes. Therefore, when conducting research activities during the development stage of a product’s lifecycle, think pilot testing instead of full-blown experiments with control groups and large data sets. Once the product has been released, however, it may be the perfect time to conduct a more in-depth experiment to determine shortcomings that can be improved in later releases. It’s important to remember what type of research is most appropriate at each stage of the product lifecycle (i.e., market analysis, user testing, A/B testing, product research, etc.) — and to stay within pre-determined timeframes so as not to diminish the ROI of UCD research.

Tip #17: Understand which research activities are most appropriate and provide the highest ROI during the various stages of a product’s development lifecycle — save elaborate experiments for the iteration phase and focus on quick user-testing for the design and development stages.

Challenge #5 — Understanding When to Integrate Specific Research Activities

Perhaps the greatest challenge to integrating Agile and UCD is understanding when during the product lifecycle to conduct various research activities, and how to actually integrate them with Agile’s ‘continuous delivery’ of working code. Most, if not all, UCD research activities can (and should) be done during Sprints. So integrating them within Agile is a matter of determining when they should be done, then populating the tasks within the appropriate Sprints. To accommodate research, the “shippable product increment” convention may need to be relaxed, which can be readily facilitated through the inclusion of a sprint zero. At their core, Sprints are designed to achieve a specific aim, which has traditionally meant new, functioning code. UCD research activities, however, result in new ideas, design recommendations, or research considerations that inform rather than constitute a product increment. To co-exist, this nuance must be accepted and valued by Agile teams. For example, a company may be developing training using an emerging technology such as virtual reality (VR). Software developers may have Sprint tasking in their product backlog to develop requirements for the intended hardware. However, the team may not know which VR platform is best suited to the training program, and thus cannot actually complete the development (or worse, they complete development on a poor platform and then have to redevelop on the best platform once it is identified by UCD researchers later on). UCD researchers can execute Sprint tasking to review the literature — or, if the literature is lacking, conduct their own small-scale investigation — and provide a recommendation. In this way, UCD research tasking can be included within Sprints to ensure the results of research activities are incorporated into product increments.

Tip #18: The Agile process includes planning, design, development, testing, and iteration stages, both within Sprints and at the product lifecycle level; research tasking should be incorporated into all of these stages, at both levels, as appropriate.

A table showing common UXR activities
Common UCD Research Activities and When to Do Them During a Product’s Lifecycle.

References

Abrahamsson, P., Warsta, J., Siponen, M. T., & Ronkainen, J. (2003). New directions on agile methods: A comparative analysis. In 25th International Conference on Software Engineering, IEEE, (pp. 244–254). Portland, Oregon.

Abras, C., Maloney-Krichmar, D., & Preece, J. (2004). User-centered design. Bainbridge, W. Encyclopedia of Human-Computer Interaction. Thousand Oaks: Sage Publications.

Azevedo, E. M., Alex, D., Olea, J. M., Rao, J. M., & Weyl, E. G. (2018). A/b testing. In Proceedings of the Nineteenth ACM Conference on Economics and Computation. ACM.

Baard, P. P., Deci, E. L., & Ryan, R. M. (2004). Intrinsic need satisfaction: a motivational basis of performance and weil‐being in two work settings. Journal of Applied Social Psychology, 34(10), 2045–2068.

Barlow, J. B., Giboney, J., Keith, M. J., Wilson, D., Schuetzler, R. M., Lowry, P. B., & Vance, A. (2011). Overview and guidance on agile development in large organizations. Communications of the Association for Information Systems, 29(2), 25–44.

Beck, K. M., Beedle, M., van Bennekum, A., Cockburn, A., Cunningham, W., Fowler, M., Grenning, J., Highsmith, J., Hunt, A., Jeffries, R., Kern, J., Marick, B., Martin, R. C., Mellor, S., Schwaber, K., Sutherland, J., & Thomas, D. (2001). Manifesto for Agile Software Development. https://agilemanifesto.org/

Bergendahl, M. N., & Ritzen, S. (2013). Strategies for Mutual Learning Between Academia and Industry. In A. Chakrabarti & R. V. Prakash (Eds.), ICoRD ’13 (pp. 667–677). Springer India.

Bianchi, M., Marzi, G., & Guerini, M. (2018). Agile, Stage-Gate and their combination: Exploring how they relate to performance in software development. Journal of Business Research, 110, 538–553. https://doi.org/10.1016/j.jbusres.2018.05.003

Bruneel, J., D’Este, P., & Salter, A. (2010). Investigating the factors that diminish the barriers to university–industry collaboration. Research Policy, 39(7), 858–868. https://doi.org/10.1016/j.respol.2010.03.006

Diamond, D. H. (2019). How To Write Better User Stories Using UX Research. Retrieved from https://www.userinterviews.com/blog/how-to-write-better-user-stories-using-ux-research

Digital.ai (2020). 14th annual state of agile report. www.stateofagile.com

Dingsøyr, T., Fægri, T. E., Dybå, T., Haugset, B., & Lindsjørn, Y. (2016). Team performance in software development: research results versus agile principles. IEEE software, 33(4), 106–110.

Dybå, T., & Dingsøyr, T. (2008). Empirical studies of agile software development: A systematic review. Information and Software Technology, 50(2008), 833–859. https://doi.org/10.1016/j.infsof.2008.01.006

Garcia, R., Araújo, V., Mascarini, S., Santos, E. G., & Costa, A. R. (2019). How the Benefits, Results and Barriers of Collaboration Affect University Engagement with Industry. Science and Public Policy, 46(3), 347–357. https://doi.org/10.1093/scipol/scy062

Gordon, S. E. (1991). Front-end analysis for expert system design. In Proceedings of the Human Factors Society Annual Meeting (Vol. 35, No. 5, pp. 278–282). Sage CA: Los Angeles, CA: SAGE Publications.

Head, M. L., Holman, L., Lanfear, R., Kahn, A. T., & Jennions, M. D. (2015). The extent and consequences of p-hacking in science. PLoS Biol, 13(3), e1002106.

Hewlett Packard Enterprise. (2015). Agile is the new normal: Adopting Agile project management. https://softwaretestinggenius.com/docs/4aa5-7619.pdf

Kaim, R., Harting, R.-C., & Reichstein, C. (2019). Benefits of agile project management in an environment of increasing complexity- a transaction cost analysis. In I. Czarnowski, R.J. Howlett, & L.C. Jain (Eds.), Intelligent Decision Technologies (pp.195–204). Springer Nature Singapore.

Knopf, J. W. (2006). Doing a literature review. PS: Political Science & Politics, 39(1), 127–132.

Kostić, Z. (2018). Innovations and Digital Transformation as a Competition Catalyst. ЕКОНОМИКА, 64(1), 13–23.

Larman, C. (2004). Agile & Iterative Development: A Manager’s Guide. Boston: Addison-Wesley.

Lárusdóttir, M., Cajander, Å., Erlingsdottir, G., Lind, T., & Gulliksen, J. (2016). Challenges from Integrating Usability Activities in Scrum: Why Is Scrum so Fashionable?. In Integrating User-Centred Design in Agile Development (pp. 225–247). Springer, Cham.

Lee, G. & Xia, W. (2010). Toward Agile: An Integrated Analysis of Quantitative and Qualitative Field Data on Software Development Agility. MIS Quarterly, 34(1), 87–114.

Leffingwell, D., & Widrig, D. (2003). Managing Software Requirements: a use case approach. 2003. ISBN-13, 978–0321122476.

Mao, J. Y., Vredenburg, K., Smith, P. W., & Carey, T. (2005). The state of user-centered design practice. Communications of the ACM, 48(3), 105–109.

Maxwell, J. A. (2008). Designing a qualitative study. The SAGE handbook of applied social research methods, 2, 214–253.

Nielsen, J., & Landauer, T. K. (1993). A mathematical model of the finding of usability problems. In Proceedings of the INTERACT’93 and CHI’93 conference on Human factors in computing systems (pp. 206–213).

Nielsen, J. (2000). Why You Only Need to Test with 5 Users. Retrieved from https://www.nngroup.com/articles/why-you-only-need-to-test-with-5-users/

Norman, D. (2013). The design of everyday things: Revised and expanded edition. Basic books.

Pirro, L. (2019, Apr 10). How agile project management can work for your research. Nature, https://doi.org/10.1038/d41586-019-01184-9

Poe, L. F., & Seeman, E. (2020). An Empirical Study of Post-Production Software Code Quality When Employing the Agile Rapid Delivery Methodology. Journal of Information Systems Applied Research, 13(1), 4–11.

Raison, C., & Schmidt, S. (2013). Keeping User Centered Design (UCD) Alive and Well in Your Organization: Taking an Agile Approach. In A. Marcus (Ed.), Design, User Experience, and Usability. Design Philosophy, Methods, and Tools. DUXU 2013. Lecture Notes in Computer Science, vol 8012. Berlin, Heidelberg: Springer. https://doi.org/10.1007/978-3-642-39229-0_61

Rajlich, V. (2006). Changing the paradigm of software engineering. Communications of the Association for Computing Machinery (ACM), 49, 67–70. https://doi.org/10.1145/1145287.1145289

Royce, W. W. (1970). Managing the Development of Large Software Systems. Proceedings of IEEE WESCON, 1–9.

Ryan, R. M., & Deci, E. L. (2000). Self-Determination Theory and the Facilitation of Intrinsic Motivation, Social Development, and Well-Being. American Psychologist, 55(1), 68–78. https://doi.org/10.1037/0003-066X.55.1.68

Schmidt, C., Kude, T., Heinzl, A., & Mithas, S. (2014). How Agile Practices Influence the Performance of Software Development Teams: The Role of Shared Mental Models and Backup. Proceedings of the Thirty Fifth International Conference on Information Systems, 1–18.

Schön EM., Winter D., Escalona M.J., & Thomaschewski J. (2017). Key Challenges in Agile Requirements Engineering. In Baumeister H., Lichter H., Riebisch M. (Eds.), Agile Processes in Software Engineering and Extreme Programming. XP 2017. Lecture Notes in Business Information Processing, vol 283. pp. 37–51. Cham: Springer. https://doi.org/10.1007/978-3-319-57633-6_3

Schwaber, K. (1997). SCRUM Development Process. In J. Sutherland, C. Casanave, J. Miller, P. Patel, & G. Hollowell (Eds.), Business Object Design and Implementation (pp. 117–134). Springer. https://doi.org/10.1007/978-1-4471-0947-1_11

Schwaber, K., & Beedle, M. (2002). Agile Software Development with Scrum. Upper Saddle River, NJ: Prentice-Hall.

Schwaber, K., & Sutherland, J. (2017). The Scrum Guide. https://www.scrumguides.org/docs/scrumguide/v2017/2017-Scrum-Guide-US.pdf

Serrador, P., & Pinto, J. (2015). Does Agile work? — A quantitative analysis of agile project success. International Journal of Project Management, 33. https://doi.org/10.1016/j.ijproman.2015.01.006

Sfetsos, P., & Stamelos, I. (2010). Empirical Studies on Quality in Agile Practices: A Systematic Literature Review. In F. Brito e Abreu, J. Pascoal de Faria, & R. J. Macado (Eds.), 2010 Seventh International Conference on the Quality of Information and Communications Technology (pp. 44–53). Porto, Portugal. https://doi.org/10.1109/QUATIC.2010.17

Siegel, D., Waldman, D., Atwater, L., & Link, A. (2003). Commercial Knowledge Transfers from Universities to Firms: Improving the Effectiveness of University–Industry Collaboration. The Journal of High Technology Management Research, 14, 111–133. https://doi.org/10.1016/S1047-8310(03)00007-5

Silva, F. B., Bianchi, M. J., & Amaral, D. C. (2019). Evaluating combined project management models: Strategies for agile and plan-driven integration. Product Management & Development, 17(1), 15–30. https://doi.org/10.4322/pmd.2019.003

Taylor, G. R. (Ed.). (2005). Integrating quantitative and qualitative methods in research. University press of America.

Tartari, V., Salter, A., & D’Este, P. (2012). Crossing the Rubicon: Exploring the factors that shape academics’ perceptions of the barriers to working with industry. Cambridge Journal of Economics, 36(3), 655–677. https://doi.org/10.1093/cje/bes007


Dr. Victoria L. Claypoole is a Human Factors and Cognitive Psychologist with an extensive background in Product Design.