  • Position paper
  • Open access

Learning progressions: framing and designing coherent sequences for STEM education

Abstract

The coupled influences of scholarship in the fields of Psychology, Philosophy, and Pedagogy beginning in the 1950s set in motion the emergence of new images, methodological perspectives, theories, and design principles about learners and learning. Advances in cognitive and sociocultural psychology, shifting images of the nature of science, recognition of the importance of disciplinary discourse practices in learning, the scaffolding of learning by tools and technologies, along with the adoption of ‘assessment for learning’ instructional strategies are among the factors that have led researchers and practitioners to advance positions that learning ought to be coordinated and sequenced along conceptual trajectories, developmental corridors, and learning progressions (LP). Following opening Introduction and LP Research Framework sections that provide an overview of the run-up to LP research and development, I then turn to future research discussions and implications targeting five LP domains: Using Knowledge with Scientific Practices; Instructional Pathways – Early Childhood Learning; Teaching Experiments – Science and Mathematics; Upper/Lower Anchors for Measuring Progress; and Concepts & Practices. The Conclusion section points to overarching challenges for researchers, planners, and teachers in STEM education. There is much to learn for all!

Introduction

Science education practices and policy over the past 100 years reveal a narrative of oscillating changes and developments (Rudolph, 2019). The 1950s was a pivotal turning point, a disruptive decade with significant changes to the psychological, philosophical, and pedagogical frameworks that informed science education. For psychology, the emergence during the 1960s and 1970s of cognitive and constructivist learning theories for science and mathematics learning began to challenge the tenets of behavioral learning theory (National Research Council, 1999). In philosophy, the adoption of historical and cognitive frameworks for depicting the growth of scientific knowledge challenged logical positivist images for building and refining scientific theories, models, and explanations (Giere, 1988; Kuhn, 1970). The major pedagogical shift regarding teaching science was adopting an ‘enquiry into enquiry’ stance for science education (Duschl, 1990; Rudolph, 2019; Schwab, 1962). This change, in turn, influenced and ignited considerations regarding the central roles that epistemic and scientific practices (e.g., systems thinking, modeling, argumentation, computational thinking) play in learning science.

The changes begun in the 1950s ignited ideas that science education needed to be more than learning what we know; that is, more than mere mastery learning. The new learning goals seek to understand and engage with how ideas and concepts build together. That is, they take a ‘science as practices’ stance that examines how we came to know the natural world and uses evidence to explain why we believe a scientific explanation in the face of competing alternatives. The shift was toward ‘using knowledge’ and engaging learners in discipline-based epistemic and scientific practices. The emerging models led researchers and educators to advance positions that learning ought to be coordinated and sequenced over longer periods of time; not over days of lessons but rather over months and years of development. The emergent perspectives include conceptual trajectories (Driver, Leach, Scott, & Wood-Robinson, 1994), developmental corridors (Brown, 1997), learning trajectories (Simon, 1995), and learning progressions (National Research Council, 2006, 2007). And, importantly, developmental psychology research (National Research Council, 2007) indicates that learning progressions can start at early ages, well before children enter formal schooling.

Over the course of one decade, 1999–2009, the US National Research Council assembled distinguished panels of scholars to develop synthesis reports on learning research. Targeted topics included social, cognitive, and behavioral models of learning, assessing learning, test standards, reading education, mathematics education, and science education. Titles included How People Learn; Knowing What Students Know; Starting Out Right; Adding It Up; Systems for State Science Assessment; and Taking Science to School. Each synthesis report contributed to the emergent thinking about developmental pathways, teaching sequences, learning trajectories (LT), and learning progressions (LP).

Two highly influential reports released by the Consortium for Policy Research in Education (CPRE) further synthesized and provided overviews of the evidence and rationales for extant reasoning about, and conjectures regarding, the potential contributions that the design of coherent sequences of instruction might achieve. The first CPRE report, Learning Progressions in Science: An evidence-based approach to reform (Corcoran, Mosher, & Rogat, 2009), was followed shortly by Learning Trajectories in Mathematics: A foundation for Standards, Curriculum, Assessment and Instruction (Daro, Mosher, & Corcoran, 2011).

The CPRE reports challenged the established thinking that government agencies and school districts “shouldn’t prescribe classroom practices to frontline educators … The concept of learning progressions offers one promising approach to developing the knowledge needed to define the ‘track’ that students may be on, or should be on” (Daro et al., 2011). The LT report goes on to summarize the hoped-for potential of LPs. One is that LPs can inform teachers about what to expect from their students. LPs provide an empirical basis for choices about when to teach what to whom. LPs identify key waypoints along the paths in which students’ knowledge and skills are likely to grow and develop in school subjects. Such waypoints could then form the backbone for curriculum and for instructionally meaningful assessments and performance standards.

The agenda now for education researchers and curriculum developers is scripting and enacting the design principles for preK-16 curriculum, instruction, and assessment models. More specifically, it is taking the research on student learning from the domains of science education, developmental psychology, sociocultural theory, epistemic cognition, and nature of science to develop coherent instructional sequences that span grade bands, integrate STEM disciplines, and ultimately inform coherent progressions of teaching and learning.

In 2011 the NSF convened the “Learning Progressions Footprint Conference” with two purposes: (1) to examine the impact NSF research was having on LP (science) and LT (mathematics) research, and (2) to provide guidance for NSF’s future investments in LP and LT research and development. Three contexts were taken up:

  1. Large scale assessment, informing designs and tasks that monitor reasoning and progress toward learning goals;

  2. Classroom practices, guiding the design of curriculum, instruction, and formative assessments as well as professional development; and

  3. State and national standards development, examining the effectiveness of educational systems at local, state, and national contexts.

Research on the design of learner-centered curricula (Bransford et al., 2006; Duschl & Gitomer, 1991; Krajcik, Blumenfeld, Marx, & Soloway, 1994; Linn, 1995) and on the design of student-centered learning environments (Cobb, Confrey, diSessa, Lehrer & Schauble, 2003) provided guidelines on how to scaffold learning opportunities into classrooms. This focus on student-centered learning sequences brought to light the need for developing performance assessment and formative assessment tasks that would make learners’ thinking visible so as to inform whether and how learning was progressing and enable helpful feedback and supports for subsequent learning. The shift is away from teacher-centered and toward learner-centered classrooms. The research focus continues to be about developing and adopting learning progressions (LP) that are framed around (1) the learning goal(s) or outcome; (2) the learners’ developmental progressions in thinking, reasoning, and learning; (3) the design of assessments that inform feedback on learning; and (4) the design of activities and tasks that are accessible to learners and the sequence of instruction that promotes increasingly more sophisticated ways of knowing and thinking.

The next two sections examine some of the unresolved issues in LP research and development. The discussion focuses on a set of selected LP-Topics and the accompanying theoretical frameworks that reveal the ‘theme and variation’ across LP design and methodological perspectives. The topic selection of LP research and development is drawn from four sources:

  • “Learning progressions and teaching sequences: A review and analysis” (Duschl, Maeng, & Sezen, 2011);

  • “Learning Progressions Footprint Conference” (National Science Foundation, 2012);

  • “Geoprogressions – Learning progressions for maps, geospatial technology and spatial thinking: A research handbook” (Solem, Huynh, & Boehm, 2014);

  • Future of education and skills 2030: Curriculum analysis – A synthesis of research on learning trajectories/progressions in mathematics (Confrey, 2019)

For each ‘LP-Topic’, a brief description is provided first to contextualize the conceptual and methodological frameworks therein. Each LP-Topic is then followed by an ‘Implications for Future Research’ statement that focuses on LP issues and research agendas.

Learning progression research frameworks

Any analyses of LP research studies must address two overarching questions:

  • How well developed is the identification of the foundational knowledge that facilitates and advances pathways of reasoning and understanding?

  • How thorough is the description of the teacher mediated learning pathways?

We should be asking research questions that make it possible to judge an LP as complete, near complete, incomplete, or flawed. Advancing the LP research agenda will necessitate attaining some degree of consensus regarding the guiding conceptions that frame LP research questions, curriculum frameworks, instructional methods, and formative and summative assessments. Lehrer and Schauble (2015) identify several issues and questions scholars need to consider to extend the scope and potential of LP research. For establishing clearer aims and goals: what to include in long-term (multiple years or grades) LP designs? For determining the granularity of LP description: how to support conceptual development in an LP? And how to generate and test LPs?

Driver et al. (1994) argue that cross-age studies of conceptual development can inform curriculum planning. They posit three factors influencing students’ conceptual reasoning and learning pathways: “changes in students’ ontologies within specific domains, changes in reasoning strategies, and changes in epistemological commitments” (p. 97). This suggests a need for more longitudinal research studies to understand and characterize students’ learning pathways and the ways students’ changing ontologies, reasoning strategies, and epistemic commitments influence learning.

The introduction of science education Learning Progressions (LP) and mathematics education Learning Trajectories (LT), as described in the Introduction section above, coincided with the emergence of new curriculum, instruction, and assessment frameworks and with national policy discussions and reforms about rethinking science and mathematics teaching and learning.

The guiding frameworks adopted by researchers when developing an LP naturally shape the LP design and the validation process. Duschl et al. (2011) found that LP research reports guided by ‘achievement measurement models’ (e.g., State Tests, NAEP, TIMSS) tend to lead to curriculum designs that span days or weeks. Furthermore, the instructional frameworks adopt a ‘fix it’ model to address students’ novice/expert conceptions. Not surprisingly, considerations for scientific and epistemic practices are typically absent, as are expectations for students using knowledge with practices.

In contrast, LP research reports employing ‘cognitive learning assessment’ models such as the ‘assessment triangle’ (National Research Council, 2001) adopt longer instructional timeframes, employing multiple units of study implemented across months and years. Here, instructional frameworks and learning performances adopt a ‘work with it’ learning model grounded in students’ intuitions and innate capacities for reasoning and sensemaking, and there are strong commitments to using knowledge linked to scientific practices, e.g., argumentation, data analysis, computational thinking, and modelling.

Huynh and Gotwals (2014) report as part of a project developing a geography LP through the lenses of spatial thinking, mathematics, and geospatial technology. The report reveals several important issues regarding engagement in LP research when bridging between practices, content, and tool-use. One issue is how to begin creating an LP. They argue that an LP should coordinate three qualities:

  (i) Have conceptual coherence regarding how learners/teachers new to a domain come to master that domain;

  (ii) Build on and be compatible with existing learning research findings in a domain; and

  (iii) Include processes for measuring and validating students’ learning.

A second issue that arises concerns the multiple strategies available for determining the starting/ending points or lower/upper anchors for the design of LPs. One method is to conduct cross-sectional studies of learners’ understandings at different stages and ages of development to establish a hypothetical progression for instructional pathways (cf. Duncan, Rogat, & Yarden, 2009; Mohan, Mohan, & Uttal, 2014). A second method is to examine specific instructional activities and conduct teaching experiments to ascertain what students are capable of learning with proper supports and opportunities (cf. Cobb, McClain, & Gravemeijer, 2003; Lehrer & Schauble, 2012). A third method is to determine the Upper/Lower anchors for measuring progress, or progress variables; here, questions focus on what it means to be at a ‘level’ and how one progresses to higher levels (cf. Morell, Collier, Black, & Wilson, 2017). Yet another issue has been labeled the ‘messy middle’ problem. Here the challenge is that there are multiple and variable pathways that learners can traverse, as alluded to above by Driver et al. (1994) and Lehrer and Schauble (2015). Such diversity among learners can be a quandary for group/class instructional frameworks.

Here, then, is a discussion of five domains relevant for conducting LP research: Using Knowledge with Scientific Practices; Instructional Pathways – Early Childhood Learning; Teaching Experiments – Science and Mathematics; Upper/Lower Anchors for Measuring Progress; and Concepts & Practices.

LP research domains - topics & implications

Using knowledge with scientific practices

Research on designing LPs for science and epistemic practices (e.g., building and refining theories, models, and mechanisms; systems and spatial thinking; argumentation) is needed to support deep learning during project-, problem-, or place-based inquiry contexts. Science LP researchers might consider adopting the ‘learning models’ approach for LT in mathematics education (see Teaching experiments – science and mathematics below). Historical inquiry methods (Duschl, 1990; Osborne, 2018) as well as contemporary ethnographies and case studies of epistemic cultures (Knorr-Cetina, 1999) can also generate LP designs. For science practices LPs, the broad focus is examining the developmental pathways and the science skills and reasoning associated with building and refining knowledge. Such epistemic practices are critical for engaging in the critique and communication of scientific inquiry and for developing understandings about the nature of science. Consider two examples for research on science and epistemic practices: Spatial Thinking and Reasoning with Evidence.

Spatial thinking

Reasoning with and about spatial data or phenomena, and using spatial technology, is increasingly part of our daily lives and the technical tools we use. Mohan, Mohan, and Uttal (2014) report on the process they used for designing a ‘Geoprogression’ LP for maps, geospatial technology, and spatial thinking. The approach included three components: (1) Identifying the Knowledge Space, (2) Defining the Domain of a Spatial Thinking LP, and (3) Learning Progression Anchors and Progress Variables. The third component serves as a good example of the complex decision-making that often arises when striving to define the beginning (Lower Anchor) and ending (Upper Anchor) points of an LP. At issue are the competing perspectives for the psychological, philosophical, and pedagogical frameworks that are adopted, e.g., cognitive vs socio-cultural learning theory, realist vs instrumental philosophical views of science, group vs individualized teaching routines.

With respect to the temporal development of spatial thinking, there is a lack of consensus among developmental psychology researchers, especially for young pre-K and elementary age students. There are two competing schools of thought among nativist and constructivist researchers: “There is a notable debate about the capabilities of these very young children that is significant to consider in learning progressions research” (Mohan et al., 2014, p. 13). (See Instructional pathways – early childhood learning below.)

Nativist researchers believe that humans are born with innate mental modules for spatial thinking and with little guidance from adults can perform spatial tasks. Children as young as three can use some of the spatial properties of room maps. Constructivist researchers maintain that full mastery of spatial thinking cannot occur until later in life. A conceptual understanding of maps depends on substantial learning and experience. The issues raised here apply to all domains for early childhood learning and LP design.

Reasoning with evidence

More LP research is needed on teaching and learning about the nature of scientific evidence and the links to epistemic cognition (Greene, Sandoval, & Bråten, 2016). Here, too, there are competing and complementary perspectives for designing LP learning strategies for obtaining and using evidence. Consider three examples:

  (1) McNeill and Berland (2017) identify three ‘using evidence’ teaching practice problems (P) and three design heuristics (H) as solutions:

    • P1 Teaching science knowledge as final form ideas and not as evolving evidence-based models and theories. H1 Use evidence that is phenomenon based.

    • P2 Viewing data as factual information. H2 Focus on the Evidence–Explanation continuum (Duschl, 2003):

      ◦ (i) include decision steps for determining the evidence: questions to measures to data to evidence; and

      ◦ (ii) include decision steps for deploying the evidence: evidence to patterns to models to explanations.

    • P3 Learning discrete final form ideas. H3 Point to evidence used in argumentation discourse.

  (2) Duncan, Chinn, and Barzilai (2018) introduce the ‘Grasp of Evidence’ framework steps – Analysis, Evaluation, Interpretation, Integration – to instructional sequences to distinguish between experts’ and laypersons’ use of evidence. These are applied to epistemic components of the AIR model: Epistemic Aims & Values, Epistemic Ideals, and Reliable epistemic processes.

  (3) Krist, Schwarz, and Reiser (2019) advance essential epistemic heuristics for guiding mechanistic reasoning: thinking across scalar levels; identifying and unpacking relevant factors; and linking to coordinated relationships over time and/or space. The heuristics help to characterize and compare mechanistic reasoning across science content areas and thus have the potential to inform the development of mechanisms over time.

Implications for LP research

Research on epistemic practices shares the challenge of defining and measuring the mid-point development of progress with scientific practices. The three ‘Teaching Problems’, the ‘Grasp of Evidence’ dynamics, and the three ‘Mechanistic Reasoning’ guiding heuristics are sound examples of LP research programs that take up important interdisciplinary questions of teaching and learning across the grade span. More research on disciplinary, interdisciplinary, and transdisciplinary science practices is needed. Consider the following assessment research targets from the NSF Footprint Conference:

  • Conducting research on examining students’ progress over extended and varied time periods (weeks, months, years);

  • Developing expertise with using LPs to gauge progress and diagnose difficulties;

  • Using LPs to develop classroom assessments; and

  • Using LPs to evaluate high/low quality instructional interventions.

The research tells us that developing rich, conceptual knowledge takes time and requires instructional support employing sound assessment practices. Importantly, the content of an LP is more than core conceptual knowledge; it also includes the epistemic and social practices that characterize a domain of science or mathematics. The LP approach to the design and alignment of curriculum, instruction, and assessment is grounded in core knowledge theories of cognitive development and learning. The emerging notion is to design an LP around the most generative and core disciplinary ideas/practices. Additionally, the core ideas should be accessible to students in beginning school grades and have the potential for sustained exploration across several grades (e.g., K-8). Critically important is the need to research instructional interventions that advance learning.

Instructional pathways – early childhood learning

The extensive research on infants’ and young children’s cognitive development underscores the variety of knowledge resources and reasoning capabilities children bring to formal/informal education. Young learners are anything but empty minds. Contemporary research, while maintaining the notion of developmental pathways, rejects age-based stages of development (e.g., concrete to formal). Young children are capable of noticing patterns and attributes in the natural world, linking those patterns and attributes to science concepts, developing explanations of natural phenomena, and reasoning about abstract ideas in meaningful and productive ways (National Research Council, 2007, 2008). As such, LPs can begin at early grade levels.

Learning research reveals that developing rich, conceptual knowledge takes time and requires instructional support employing sound assessment practices. The content of LPs needs to be more than attaining core conceptual knowledge learning goals. LP designs must also take up important epistemic reasoning practices (e.g., building and refining knowledge claims) and social practices (critiquing and communicating claims) that characterize a disciplinary domain of science or mathematics. Reasoning in chemistry, physics, biology, or the Earth/Space sciences, while sharing some similar domain-general practices and ideas (e.g., hypothesis testing, the physics of matter and energy, the particulate nature of matter), involves very different types and forms of evidence, measures, scales, models, and mechanisms. As such, LPs are frequently situated in domain-specific contexts that couple the use of concepts with practices.

The PISA (OECD, 2018) and National Research Council (2012) frameworks distinguish between conceptual, procedural, and epistemic learning goals. The LP/LT approach to the design and alignment of curriculum, instruction, and assessment is grounded in theories of cognitive development and learning that focus on core knowledge, not cognitive levels (Bloom’s Taxonomy) nor developmental stages (Piaget’s concrete/formal stages). The emerging perspective is for LPs/LTs to be built around learners’ experiences, intuitions, and prior knowledge. The focus is determining the most generative and core ideas/practices that are central to the discipline and that support students’ learning. Additionally, the selected core ideas should be accessible to students in beginning school grades and have the potential for sustained exploration across several grades (e.g., K-8). Critically important are instructional interventions that deepen and broaden learning.

Implications for LP research

The recommendation (National Research Council, 2007, 2008, 2012) is that contexts for experiencing science and introducing new disciplinary core ideas and practices should be accessible to learners and have the potential for sustained generative exploration across subsequent grades. One current area of interest for LP researchers is studying the impact culturally relevant and place-based learning contexts have on learning. The basic research issue is determining how learning occurs, for whom, and under what conditions. The ‘one size fits all’ assessment models are being challenged (Penuel & Shepard, 2017).

Characterizing and determining accessible and generative contexts needs further attention by researchers. Research shows that young children ages 3–4 are, in select domains, capable of sophisticated reasoning (NRC, 2007), as was shown above for spatial reasoning. It also follows, then, that if learning environments do not continue to present science as a theory-building or model-building enterprise with specialized ways of talking, writing, and representing ideas, then these innate abilities of children may fade away (Gopnik, 1996). Metz (2004, 2008) cogently argues that we need a rethinking of ‘developmentally appropriate’ when adopting a learning progression perspective; young children are more capable than we think. We must ask, then, what are the robust generative topics/skills and when are they accessible to learners?

Longitudinal studies of teaching and learning across grade bands (for the US: preK-2, 3–5, 6–8, 9–12) are needed (see the Upper/lower anchors for measuring progress section below). We need to ask, how does reasoning emerge and develop? Schauble (2008) offers some advice: while we certainly want research on young children to answer the question ‘Where does reasoning and learning come from?’, we must also ask ‘Where is reasoning going?’ and ‘What conditions support productive change?’

“Answers to the first question help us better understand the foundation on which further development can build. Answers to the second provide a sense of developmental trajectory, or more likely, trajectories. What characteristic changes are coming up? What pathways of change are usually observed? And answers to the third question focus on how those changes can get supported in a productive way.” (p. 51).

Criteria for what’s accessible to learners dictate when – at what grade/age level – an LP should begin. Researchers need to establish empirically the foundational platforms, or lower anchors, from which the generative ideas and practices obtain. More research is needed on establishing criteria to determine age-appropriate, developmentally appropriate progressions. Longitudinal studies are needed, as is research on the roles of, and distinctions between, domain-general and domain-specific reasoning (Fisher, Chinn, Engelmann, & Osborne, 2018).

Teaching experiments – science and mathematics

The teaching experiment (TE) is a methodological framework that has contributed to our thinking about the design of instruction and pathways of learning. There are two TE research programs, one by mathematics education researchers and another by science education researchers. The two research approaches have differing perspectives about informing learning pathways.

In mathematics education, the TE goal is the design and validation of Learning Trajectories (LT) that inform instructional practices for teaching pathways. The early thinking about LTs is attributed to two sources. One is the ‘Realistic Mathematics Education (RME) Group’ from the Netherlands’ Freudenthal Institute (Gravemeijer, 1994), which advanced an agenda of developmental progression learning experiments that conjecture possible learning routes for significant mathematical ideas. The RME approach studies students’ solution strategies to instructional tasks. The goal is establishing guidelines for sequences of instructional tasks that promote participation in targeted types of thinking, reasoning, and learning.

The second contributor is Simon (1995), who proposed a constructivist model of pedagogical thinking for LT. According to Simon, a LT is comprised of “the learning goal, the learning activities, and the thinking and learning in which the students might engage” (p. 133). The focus of the research here as well is the development of tasks that are connected to students’ thinking and learning; the ‘learning model’ (Clements & Sarama, 2004). The learning model is tested and extended with TEs that examine tasks and teacher interactions that elicit students’ thinking and reasoning. Such models, which guide LT designs, may be grounded in the historical development of mathematics, observations of informal solution strategies, or the emergent mathematical practices of student groups (Cobb, McClain, & Gravemeijer, 2003).

The TE framework in science education was influenced by Didaktik and Conceptual Change research. The Model of Educational Reconstruction (MER) by Duit, Gropengießer, and Kattmann (2005) uses three strands of research to design instruction formats: (1) investigations into students’ perspectives, (2) clarification and analysis of subject matter content, and (3) design of learning environments. The cornerstone of the MER research program is the ‘Teaching Experiment’, an interview-type method that seeks to understand how individuals coordinate core conceptual understandings in domain-specific contexts (e.g., evolution, ecology, adaptation, cellular functions, among others).

In a teaching experiment, students, typically in dyads or triads, are presented with demonstrations of or engagements with phenomena and asked to employ think-aloud protocols while reasoning through the task. Von Aufschnaiter and Rogge (2010) and Von Aufschnaiter, Erduran, Osborne, and Simon (2008) employ a two-phase, six-step process when conducting TEs. The three research-phase steps – Description, Intuitions, and Rules – identify students’ missing conceptions or misconceptions. The three development-phase steps – Explicit Rule-Based, Intuitive Rule-Based, Exploration – develop students’ conceptual reasoning.

Implications for LP research

LTs are the basis for sequences of activities and inform how to avoid the fragmentation and short curriculum strands common in textbooks. The goal, which is shared with science LPs, is to develop coherent instructional sequences that are justified by both theoretical deliberations and empirical data. But the mathematics education researchers’ ‘learning model’ TE outcomes, which target thinking and reasoning, differ in interesting and important ways from the science education research goal of addressing missing conceptions and misconceptions. Research recommendations from the NSF Footprint Conference regarding instruction are:

  • Studies of Professional Development models on how to use LP frameworks are emerging with increasing adoptions and adaptations of the US Next Generation Science Standards ‘Three Dimensional’ (Disciplinary Core Ideas, Science & Engineering Practices, Crosscutting Concepts) teaching and learning frameworks. Mechanisms for sharing LP findings with curriculum developers need to be established.

  • Developing teachers’ capacities to recognize and nurture students’ forms of thinking. Adapting instruction based on contingent pedagogical responses is another important domain for LP R&D. Professional Learning Community, Japanese Lesson Study, and Network Improvement Community models are promising approaches for engaging groups of teachers.

  • Navigating the diversity of student experiences and connections to LPs.

Upper/lower anchors for measuring progress

Adopting LP formats requires measurements of what precisely is progressing. The articulation of boundaries and levels in the form of lower and upper anchors is a starting point for researchers. The concerns about multiple learning pathways (e.g., messy middle, stepping stones) are important. Research needs to carefully examine such pathways, and the junctures within them, in order to provide teachers with ‘signposts’ for guiding learning and to enable LP designers to generate assessments for ‘making thinking visible’.

A good example is the study by Morell et al. (2017), who report on a ‘Construct Modeling’ approach for testing the design of an LP on the structure of matter. Four construct maps - Macro Properties, Changes of State and Physical Changes, Particulate Explanations of Physical Change, Particulate Explanations of Chemical Change - were identified as core or central for learning the structure of matter, along with two auxiliary constructs - Measurement and Data Handling, and Density, Mass & Volume. Employing rigorous methods and analyses to conduct the construct-modeling approach, they discovered that students’ thinking was more complicated than hypothesized, revealing sub-structures within the core constructs. The recommendation is “the choice of instructional approach needs to be fashioned in terms of a model … of the paths through which learning might best proceed” (p. 1045).

The research challenge here, as with all LP designs adopting the lower/upper anchor levels framework, is ascertaining and measuring the intermediate, mid-point understandings of core constructs and practices, which are referred to as ‘Progress Variables’ (Wilson, 2009). For example, some of the spatial progress variables used in the Maps and Geospatial Technology LP include Symbols, Scales, Perspective Taking, Distance and Direction, and Identity and Location, and are described across ages 3–6, 7–9, and 10–12. One important issue for Mohan et al. (2014), arising from LP work, is the current emphasis on understanding the development of ideas and the lack of attention to the development of scientific practices such as Mapmaking, Map Reading and Navigation, and Using geospatial technologies like GIS. Confrey (2019), in a synthesis of LT research, found that LT research over the last decade has concentrated on five topics: Number, Measurement, Geometry, Algebra & Functions, and Probability & Statistics.

There is a need to focus more LP research on topics that receive less attention but are nonetheless prominent in State and National standards and high-stakes exams. Topics need to go beyond conceptual learning goals and attend to science and mathematics practices as well as cognitive and epistemic learning goals. Another research domain that needs more attention is studying how curriculum, instruction, and assessment are incorporated in LP designs. The emergence of integrated STEM education (Cunningham & Kelly, 2017) has complicated matters regarding curriculum designs. New lines of research are needed to examine conceptual and social learning goals with an emphasis on potential ‘clashes’ within and between the numerous STEM disciplines’ epistemic practices. Another arena for additional LP research is considering the impact LP design has on Equity & Diversity. Culturally relevant curriculum contexts and formative assessment formats are vitally important for engaging and motivating beginning and adolescent learners (Penuel & Shepard, 2017).

Concepts & practices

The growth of scientific knowledge and the growth of children’s scientific knowledge both involve processes and mechanisms for coming to see nature in new ways. One approach to conceptual change research adopts cognitive schema theory to explain conceptual change processes. The suggestion is that science learning is similar to scientific theory change. Learning is fundamentally theory-like and seen as an individual process of cognitive theory revision. On this view, children’s new insights are explained by processes of appropriating a new set of theoretical lenses or mental models that can change the way a child understands a domain and develops new, broader contexts of meaning. With this view of conceptual change, teaching sequences are constructed to teach to the misconception by providing experiences (evidence) that challenge learners’ extant incomplete mental models, leading to cognitive dissonance, new insights, and conceptual change. The goal is to fix what is wrong in the child’s thinking and correct the mistakes.

However, the translation of this view to the classroom has led to uniform conceptual change teaching practices that jump too quickly to the established scientific explanation. Consider the following excerpt from Ready, Set, Science!

Many teachers have their students do experiments or make observations with the hope that scientific understanding will miraculously emerge from the data. Being exposed to new information, however, is not the same as understanding or integrating that information into what one already knows. Real conceptual change requires that deeper reorganization of knowledge occur. (National Research Council, 2008, p. 41).

Driver et al. (1994) echo the same sentiment: “New knowledge is the result not only of the broadening in use of existing conceptions or the addition of new notions. It also involves the reorganisation of conceptual schemes themselves.” (p. 89). Duschl et al. (2011) found that it is often quite evident when the researcher(s) is working from one or another conceptual change framework – the misconception-based ‘fix it’ view or the intuition-based ‘work with it’ view. Adoption of one or the other view also impacts the inclusion of epistemic learning goals, typically ignored in the ‘fix it’ view but more frequently considered in the ‘work with it’ view. Developing epistemic criteria and evaluating the epistemic status of ideas are viewed as necessary elements in a conceptual ecology of science learning environments that seek to promote enculturation into scientific cultures and/or achieve NOS learning goals (Duschl & Grandy, 2013; Greene et al., 2016).

Implications for LP research

In summary, having conceptual change as a learning goal means not seeing science as seeking justified true beliefs but rather as practices pursuing rational beliefs and explanatory coherence that are influenced and shaped by new tools, instruments, theories, and methods. The strong recommendation from Taking Science to School is that the teaching of conceptual knowledge should not be independent of learning science practices. We recognize and appreciate today, where previous efforts have not, the need to view science as a set of processes that involve (1) logical reasoning about evidence, (2) theory change, and (3) participation in the culture of scientific practices. The hypothesis-testing experimentation practices of science are a critical component of what it means to do science. But such practices are conducted in service to other equally important dynamic elements for doing science (NRC, 2007, 2008):

  • Building theories and models.

  • Collecting and analyzing data from observations or experiments.

  • Constructing arguments.

  • Using specialized ways of talking, writing, and representing phenomena.

Conclusions

Learning Progressions and developmental pathways are relatively new research domains. New theoretical frameworks in psychology, philosophy, and pedagogy, along with new technological platforms for aiding the delivery of instruction and the analysis of learning (e.g., evidence-centered design performance assessments), are shifting thinking for framing learning goals and designing science learning environments. There are numerous challenges and opportunities for researchers, curriculum planners, and teachers in the quest to design extended coherent sequences of teaching and learning. The shift away from teaching what we know toward instructional formats that focus on reasoning, how we know, and why we believe it reflects a change of learning goals that seek competence with scientific and epistemic practices. Taking a scientific reasoning and ‘science as practice’ view applied to disciplinary and interdisciplinary contexts, and increasingly to transdisciplinary contexts that merge natural sciences, social sciences, and the humanities, is becoming the new normal.

One significant challenge, or perhaps conversation among researchers, is the selection, adoption, and implementation of teaching and learning frameworks or guiding conceptions that cohere across longer spans of time. Research on early childhood learning indicates that foundational knowledge and practices can begin early on, ages 3–6. However, tensions abound with respect to the various learning frameworks used to design instruction; e.g., information processing, socio-cultural, cognitive, behavioral, and social/emotional, among others. Continued research on domain-general and domain-specific learning will be critically important for understanding approaches to ‘integrated STEM’ in both formal and informal out-of-school contexts. So, too, will research on epistemic cognition inasmuch as different disciplinary domains - sciences, mathematics, engineering - employ different epistemologies for building knowledge. If left uncoordinated, then epistemic mash-ups are inevitable and may confuse rather than enlighten learners.

Another contentious conversation among learning scientists is the debate between information processing psychologists (Kirschner, Sweller, & Clark, 2006) and constructivist psychologists (Hmelo-Silver, Duncan, & Chinn, 2007) on the topic of cognitive or mental load and the scaffolding of problem-based and inquiry learning. Yet another challenge is using LP research to inform the design of standards and large-scale assessments. Currently, there are gaps in the LP research with respect to the curriculum topics and STEM practices that are being tested. And finally, LP research in science education will be impacted by the ongoing debates regarding explicitly teaching the nature of science (Abd-El Khalick, 2012; Duschl & Grandy, 2013).

All the LP challenges, however, will pale in comparison to the challenges of reforming our educational systems to embrace LP formats. There are systemic issues with regard to national, state, and local curriculum and instruction guidelines; STEM teacher preparation and inservice education programs; curriculum and assessment redesign; and the missions of STEM teacher and leadership professional associations and government agencies.

Availability of data and materials

Data sharing is not applicable to this article as no datasets were generated or analysed during the current study.

References

  • Abd-El Khalick, F. (2012). Examining the sources for our understandings about science: Enduring conflations and critical issues in research on nature of science in science education. International Journal of Science Education, 34(3), 353–374.


  • Bransford, J., Barron, B., Pea, R., Meltzoff, A., Kuhl, P., Bell, P., … Sabelli, N. (2006). Foundations and opportunities for an interdisciplinary science of learning. In K. Sawyer (Ed.), The Cambridge handbook of the learning sciences, (pp. 19–34). New York: Cambridge University Press.


  • Brown, A. (1997). Transforming schools in to communities of thinking and learning about serious matters. American Psychologist, 52, 399–413.


  • Clements, D. H., & Sarama, J. (2004). Learning trajectories in mathematics education. Mathematical Thinking and Learning, 6, 81–89.


  • Cobb, P., Confrey, J., diSessa, A., Lehrer, R., & Schauble, L. (2003). Design experiments in educational research. Educational Researcher, 32(1), 9–13.


  • Cobb, P., McClain, K., & Gravemeijer, K. (2003). Learning about statistical covariation. Cognition and Instruction, 21(1), 1–78.


  • Confrey, J. (2019). Future of education and skills 2030: Curriculum analysis – A synthesis of research on learning trajectories/progressions in mathematics. Paris: OECD Directorate for Education and Skills, Education Policy Committee.

  • Corcoran, T., Mosher, F. A., Rogat, A., & Consortium for Policy Research in Education (2009). Learning progressions in science: an evidence-based approach to reform. (Consortium for Policy Research in Education report #RR-63). Philadelphia: Consortium for Policy Research in Education.


  • Cunningham, C. M., & Kelly, G. J. (2017). Epistemic practices of engineering for education. Science Education, 101, 486–505.


  • Daro, P., Mosher, F. A., Corcoran, T., & Consortium for Policy Research in Education (2011). Learning trajectories in mathematics: a foundation for standards, curriculum, assessment, and instruction. (Consortium for Policy Research in Education report #RR-68). Philadelphia: Consortium for Policy Research in Education.


  • Driver, R., Leach, J., Scott, P., & Wood-Robinson, C. (1994). Young people’s understanding of science concepts: Implications of cross-age studies for curriculum planning. Studies in Science Education, 24, 75–100.


  • Duit, R., Gropengießer, H., & Kattmann, U. (2005). Towards science education research that is relevant for improving practice: The model of educational reconstruction. In H. Fisher (Ed.), Developing standards in research on science education: The ESERA Summer School 2004. New York: Taylor & Francis.


  • Duncan, R. G., Chinn, C. A., & Barzilai, S. (2018). Grasp of evidence: Problematizing and expanding the next generation science standards' conceptualization of evidence. Journal of Research in Science Teaching, 55, 907–937.


  • Duncan, R. G., Rogat, A., & Yarden, A. (2009). A learning progression for deepening students’ understanding of modern genetics across the 5th-12th grades. Special issue on Learning Progressions for the Journal of Research in Science Teaching, 46(6), 644–674.


  • Duschl, R. (1990). Restructuring science education: The importance of theories and their development. New York: Teacher’s College Press.


  • Duschl, R. (2003). Assessment of inquiry. In J. M. Atkin, & J. Coffey (Eds.), Everyday assessment in science classrooms. Washington, DC: NSTA.


  • Duschl, R., & Gitomer, D. (1991). Epistemological perspectives on conceptual change: Implications for educational practice. Journal of Research in Science Teaching, 28(9), 839–858.


  • Duschl, R., & Grandy, R. (2013). Two views about explicitly teaching nature of science. Science & Education, 22(9), 2109–2140.


  • Duschl, R., Maeng, S., & Sezen, A. (2011). Learning progressions and teaching sequences: A review and analysis. Studies in Science Education, 47(2), 119–177.


  • Fisher, F., Chinn, C., Engelmann, K., & Osborne, J. (Eds.) (2018). Scientific reasoning and argumentation: The roles of domain-specific and domain-general knowledge. New York: Routledge Taylor Francis Group.


  • Giere, R. (1988). Explaining science: A cognitive approach. Chicago: University of Chicago Press.


  • Gopnik, A. (1996). The scientist as child. Philosophy of Science, 63, 485–514.


  • Gravemeijer, K. P. E. (1994). Developing realistic mathematics instruction. Utrecht: Freudenthal Institute.


  • Greene, J., Sandoval, W., & Bråten, I. (Eds.) (2016). Handbook of epistemic cognition. New York: Routledge, Taylor & Francis Group.


  • Hmelo-Silver, C., Duncan, R., & Chinn, C. (2007). Scaffolding and achievement in problem-based and inquiry learning: A response to Kirschner, Sweller, and Clark (2006). Educational Psychologist, 42(2), 99–107.


  • Huynh, N. & Gotwals, A. (2014). What are learning progressions? In Solem, M., N. T. Huynh, and R. Boehm, eds. Learning Progressions for Maps, Geospatial Technology, and Spatial Thinking: A Research Handbook. (pp 1-8). Washington, DC: Association of American Geographers.

  • Knorr-Cetina, K. (1999). Epistemic cultures: How science makes knowledge. Cambridge: Harvard University Press.


  • Krajcik, J., Blumenfeld, P., Marx, R., & Soloway, E. (1994). A collaborative model for helping teachers learning project-based instruction. Elementary School Journal, 94(5), 483–498.


  • Krist, C., Schwarz, C. V., & Reiser, B. J. (2019). Identifying essential epistemic heuristics for guiding mechanistic reasoning in science learning. Journal of the Learning Sciences, 28(2), 160–205.


  • Kuhn, T. (1970). The structure of scientific revolutions, (2nd ed., ). Chicago: University of Chicago Press.


  • Lehrer, R., & Schauble, L. (2012). Seeding evolutionary thinking by engaging children in modeling its foundations. Science Education, 96(4), 701–724.


  • Lehrer, R., & Schauble, L. (2015). Learning progressions: The whole world is NOT a stage. Science Education, 99(3), 432–437.


  • Linn, M. C. (1995). Designing computer learning environments for engineering and computer science: The scaffolded knowledge integration framework. Journal of Science Education and Technology, 4(2), 103–126.


  • McNeill, K. L., & Berland, L. (2017). What is (or should be) scientific evidence use in k-12 classrooms? Journal of Research in Science Teaching, 54(5), 672–689.


  • Metz, K. (2004). Children’s understanding of scientific inquiry: Their conceptualization of uncertainty in investigations of their own design. Cognition and Instruction, 22, 219–290.


  • Metz, K. (2008). Narrowing the gulf between the practices of science and the elementary school classroom. Elementary School Journal, 109(2), 138–161.


  • Mohan, L., Mohan, A., & Uttal, D. (2014). Research on thinking and learning with maps and geospatial technologies. In Solem, M., N. T. Huynh, and R. Boehm, eds. Learning Progressions for Maps, Geospatial Technology, and Spatial Thinking: A Research Handbook. (pp 1-8). Washington, DC: Association of American Geographers.

  • Morell, L., Collier, T., Black, P., & Wilson, M. (2017). A construct-modeling approach to develop a learning progression of how students understand the structure of matter. Journal of Research in Science Teaching, 54(8), 1024–1048.


  • National Research Council (1999). How people learn. In J. Bransford, A. Brown, & R. Cocking (Eds.), Center for Education, Division of Behavioral and Social Sciences. Washington, DC: The National Academy Press.


  • National Research Council (2001). In J. W. Pellegrino, N. Chudowsky, & R. Glaser (Eds.), Knowing what students know: The science and design of educational assessment. Washington, DC: National Academy Press.


  • National Research Council (2006). In M. R. Wilson, & M. W. Bertenthal (Eds.), Systems for state science assessment. Washington, DC: National Academy Press.


  • National Research Council (2007). In R. A. Duschl, H. A. Schweingruber, & A. W. Shouse (Eds.), Taking science to school: Learning and teaching science in grades K–8. Washington, DC: The National Academies Press.


  • National Research Council (2008). In S. Michaels, A. W. Shouse, & H. A. Schweingruber (Eds.), Ready, set, Science! Washington, DC: The National Academies Press.


  • National Research Council (2012). A framework for K-12 science education: Practices, crosscutting concepts, and core ideas. Washington, DC: The National Academies Press.


  • National Science Foundation (2012). Learning progressions footprint conference. Co-authors: A. Calabrese-Barton, J. Confrey, W. Penuel & L. Schauble. Washington DC: Directorate for Education and Human Resources.


  • OECD (2018). The future of education and skills 2030 project. Paris: Organisation for Economic Co-operation and Development (OECD).


  • Osborne, J. (2018). Styles of scientific reasoning: What can we learn from looking at the product, not the process, of scientific reasoning? In Fisher et al. (Eds.), Scientific reasoning and argumentation: The roles of domain-specific and domain-general knowledge, (pp. 162–186). New York: Routledge Taylor Francis Group.


  • Penuel, W. R., & Shepard, L. A. (2017). Assessment and teaching. In D. H. Gitomer, & C. A. Bell (Eds.), Handbook of research on teaching. Washington, DC: AERA.


  • Rudolph, J. (2019). How we teach science: What’s changed, and why it matters. Cambridge: Harvard Education Press.


  • Schauble, L. (2008). Commentary: Three questions about development. In R. Duschl, & R. Grandy (Eds.), Teaching scientific inquiry: Recommendations for research and implementation, (pp. 50–56). Rotterdam: Sense Publishers.


  • Schwab, J. (1962). The teaching of science as inquiry. In J. Schwab, & P. Brandwein (Eds.), The teaching of science, (pp. 1–104). Cambridge: Harvard University Press.


  • Simon, M. (1995). Reconstructing mathematics pedagogy from a constructivist perspective. Journal for Research in Mathematics Education, 26, 114–145.


  • Solem, M., Huynh, T., & Boehm, R. (2014). Geoprogressions – Learning progressions for maps, geospatial technology and spatial thinking: A research handbook. Washington, DC: Association of American Geographers.


  • Von Aufschnaiter, C., Erduran, S., Osborne, J., & Simon, S. (2008). Arguing to learn and learning to argue: Case studies of how students’ argumentation relates to their scientific knowledge. Journal of Research in Science Teaching, 45, 101–131.


  • Von Aufschnaiter, C., & Rogge, C. (2010). Misconceptions or missing conceptions? Eurasia Journal of Mathematics, Science & Technology Education, 6, 3–18.


  • Wilson, M. (2009). Measuring progressions: Assessment structures underlying a learning progression. Journal of Research in Science Teaching, 46, 716–730.



Acknowledgements

None.

Funding

None.

Author information


Contributions

RAD is the sole author and contributor to the article. The author read and approved the final manuscript.

Corresponding author

Correspondence to Richard A. Duschl.

Ethics declarations

Competing interests

The author declares that he has no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Duschl, R.A. Learning progressions: framing and designing coherent sequences for STEM education. Discip Interdscip Sci Educ Res 1, 4 (2019). https://doi.org/10.1186/s43031-019-0005-x

