Graduate teaching assistants: sharing epistemic agency with non-science majors in the biology laboratory

Abstract

In teaching laboratories, scientific reasoning and argumentation are often taught in concert so students are provided opportunities to formulate a more nuanced understanding of science-as-practice and science as a social epistemology. Given recent calls to attend to the social aspects of science, we used Critical Contextual Empiricism, a social epistemology of science, as a framework for examining what features of a scientific community emerge in the introductory biology lab. In a case study of six graduate teaching assistants (GTAs), we explored how GTAs encouraged epistemic agency to support their students’ efforts at knowledge construction in a community, collecting multiple data sources (e.g. audio recordings, students’ written work, focus group interviews) over a four-week sequence. Data analysis strategies were inductive, as initial and focused coding were applied to select exchanges garnered from within the lab. Comparative analysis identified common occurrences across each respective case, which then revealed three overarching themes. We intended for GTAs to readily extend epistemic agency to their students so insights regarding the social nature of knowledge production could be experienced and discussed. When epistemic shifts did occur, GTAs executed discursive moves targeting students’ experimental design practices (e.g. defining the dependent variable). Conversely, students’ efforts were also de-legitimized when GTAs provided specific directives to follow as challenges emerged. Finally, GTAs struggled to create a genuine community that modeled exemplary science-as-practice in the lab. Implications discuss how GTAs likely require more targeted support if community-driven learning is going to be successful in these uniquely challenging settings. Working with non-science majors adds a further layer of importance here, given that these lab-based experiences are limited and that understanding the community’s role in generating scientific knowledge is a key component of scientific literacy.

Introduction

Science education reform efforts have long targeted undergraduate science courses as a setting to promote deeper student understanding of nature of science (National Academies of Sciences, Engineering & Medicine, 2018), and yet, relative to lecture venues, very little discipline-based education research has been conducted in college laboratory settings despite the prevalence of labs in most college science curricula (National Research Council, 2012). In teaching laboratories, scientific reasoning and argumentation are taught in concert so students can formulate a more nuanced understanding of science-as-practice (Stroupe, 2014), which enables communities of learners to practice constructing scientific knowledge themselves, much like practicing scientists (Manz, 2015).

Responding to the call to integrate social epistemologies into the science classroom (Allchin, 1999; Allchin, 2014; Duschl & Osborne, 2002), we adopted Critical Contextual Empiricism (CCE; Borgerson, 2011; Longino, 2002) as a framework for examining interactions in a laboratory setting wherein community-based knowledge production was expected. We utilized CCE because, as a social epistemology of science, it highlights the influence of community composition in knowledge production and is thus related to the call to make science education more inclusive (Barton & Osborne, 1998); an important component of scientific literacy (Allchin, 2014; Brickhouse, 1994; Kelly, 2014).

In studying how students might better understand knowledge production in the sciences, we must explore how teachers conceptualize scientific knowledge with their students; an exploration that will undoubtedly focus on discourse and discursive moves (Duschl and Osborne 2002). Grinath and Southerland (2019) have recently highlighted the “nuances of epistemic talk” (p. 116) in the undergraduate biology lab; suggesting instructors’ capacity to successfully elicit student thinking supported subsequent explanatory rigor in student responses. Continued research that explores how graduate teaching assistants (GTAs) co-construct knowledge with their students while engaged in scientific inquiry and argumentation is justified and warranted (Duschl and Grandy 2008; Wyse et al. 2014), primarily because most studies of teacher epistemic understandings have been conducted in K-12 settings and focus on teacher beliefs about student learning (e.g. Muis and Foy 2010; Thompson et al. 2016; Yerdelen-Damar and Eryılmaz 2019). The current study expands on this work by examining how GTAs teach social aspects of science in a post-secondary, non-science majors (hereafter, non-majors) introductory teaching laboratory.

Challenges arising in non-majors labs

Examining how GTAs communicate an understanding of science while teaching non-majors lab classes is especially challenging within large, research-focused universities. Introductory biology courses for non-majors should prepare scientifically literate citizens who act as “competent outsiders” capable of making decisions about science in society (Feinstein et al. 2013). For most non-majors, only a single science course is required, making it critical that this experience includes instruction in science as a way of knowing (Grinath and Southerland 2019; Richardson 2013).

In most research-focused universities in the United States [U.S.] and abroad, GTAs commonly teach introductory biology laboratories that target discipline-specific content and the processes of science (Gardner and Jones 2011; Rushin et al. 1997; Sundberg et al. 2005). Unfortunately, these individuals seldom have adequate preparation (Luft et al. 2004; Roehrig et al. 2003), and lab instruction is often their first teaching experience. Furthermore, GTAs may not have the skills necessary to teach scientific inquiry in a laboratory context and they may also hold naïve or inaccurate conceptions of the philosophical underpinnings of science (Aydeniz and Bilican 2014; Ferzli et al. 2012; Honeycutt et al. 2010; Park 2004; Wyse et al. 2014). These novice instructors have been found to hold low expectations of their students, while also failing to capitalize on the value of their own research experiences when working with students in the lab (Gardner and Parrish 2019; Schussler et al. 2015). Institutional challenges also present barriers, as upwards of 20 sections “run” each semester with 30+ students per section, often sharing the same resources and lab space. Viewed holistically, these challenges present formidable barriers to effectively conveying an understanding of science in non-majors lab classes.

Epistemologies and the classroom

Prior studies of knowledge construction in the laboratory focus on individuals’ capacity to grapple with knowing rather than community production of knowledge. Previous studies have focused on practical epistemologies, as expressed during inquiry-based exercises (Sandoval 2005; Wickman 2004), theories of epistemological development (Sandoval 2014), personal epistemologies (Havdala and Ashkenazi 2007) and the development or authenticity of epistemic agency (Miller et al. 2018). Personal epistemologies focus on individuals’ understanding of what knowledge is (Hofer and Pintrich 1997; Perry 1970), whereas practical epistemologies, in contrast to formal epistemic beliefs, are manifested in the application of knowledge building (Sandoval 2005). Kelly et al. (2012) have proposed disciplinary perspectives, personal ways of knowing and social practices as three conceptualizations of epistemology in science learning; although these perspectives likely overlap. The current study fits within Kelly et al.’s (2012) social-practice perspective because it focuses on community-driven knowledge construction, as opposed to personal or practical ways of knowing, both of which emphasize individuality. In general, further investigation into the complex relationship between instructor epistemic beliefs and science teaching at the university level is needed (Hofer and Pintrich 1997; Maclellan 2015; Mansour 2009).

Theoretical framework

Theories of learning are necessarily related to theories of knowledge; thus, the ways in which learning happens and by whom are ultimately epistemological questions (Kelly et al. 2012). Recent reforms and most evidence-based teaching practices are grounded in constructivist theories related to teaching and learning (AAAS 2011). Constructivism, which situates the individual and their experiences as the locus of knowledge creation (cf. Matthews and Matthews 2014), has been widely accepted as an appropriate framework for student-centered learning (National Academies of Sciences, Engineering, and Medicine 2018; Scardamalia and Bereiter 1991). In science, however, knowledge is arguably constructed in communities, unbuffered from sociocultural influence (Knorr Cetina 1999).

If one of the goals of scientific literacy is for students to experience “thinking and acting like a scientist”, science educators should consider non-relativistic social epistemologies of science that establish and highlight the community as the locus of knowledge production as participants mimic the work of scientific communities (Brickhouse 2008; Forman et al. 2017). Scientific argumentation therefore lends itself to explicit community-based knowledge construction because it requires students to interact and critique the claims of their peers while they determine what constitutes justifiable and reasonable supporting evidence (Duschl and Grandy 2008; Manz 2015).

Critical contextual empiricism

As a framework for examining how GTAs and students interact to communicate science as a social endeavor, we adopt social-practice epistemology and CCE to ground our work in “science for all” and inclusive pedagogies (Brickhouse 1994). Teaching science as a social endeavor that benefits from diverse and varying viewpoints is foundational to creating an inclusive culture in science classrooms (Grasswick 2010). Here, we argue CCE is especially pertinent to laboratory education given recent calls advocating for biology education to promote both inquiry and inclusivity (Berland et al. 2016; Engle 2006). So, while CCE is often cited and tangentially adopted into “consensus” lists characterizing the nature of science (Abd-El-Khalick et al. 2008; Allchin 2014; Irzik and Nola 2014; Niaz and Maza 2011), there has been limited, explicit infusion of CCE into nature of science instruction (Fuselier et al. 2019) and, by our account, no studies have explored how it might be operationalized in the laboratory.

Critical to understanding how CCE relates to laboratory education is Longino’s (1990, 2002) assertion that the very sociality of the process of knowledge construction ensures objectivity; knowledge is more objective when it accounts for criticisms from multiple viewpoints. Knowledge is inherently social because communities, not individuals, produce it. Longino (2002) outlines four criteria necessary to ensure CCE in knowledge-producing communities: 1) recognized public forums for presentation and critique of work; 2) a community that responds to criticism; 3) recognized public standards belonging to the community; and 4) a tempered equality of intellectual authority that exposes potential knowledge to diverse criticisms. When these criteria are met, new ideas and their underlying assumptions are exposed to substantial criticism from multiple viewpoints, producing knowledge that is more objective. Longino (2002) stresses, through her criterion of “tempered equality of authority”, the importance of diverse voices when providing criticisms, as new perspectives might reveal varied assumptions that in turn expose biases and strengthen claims. This criterion aims to prevent privileging and/or the exclusion of certain perspectives while also acknowledging how values might influence processes of knowledge production; an effort that inescapably underlines the necessary contributions of expertise from other disciplines and a diversity of people – different classes, races, religions, and ethnicities (Borgerson 2011). This is especially pertinent in the laboratory, where the instructor is typically viewed by students as an epistemic authority (Raviv et al. 2003).

Rationale: CCE and the laboratory

We argue that what should emerge from the introductory laboratory experience is the understanding that science is a human enterprise influenced by community composition, expertise, and other social components as well as empiricism, skepticism and related rational components. Thus, CCE is a fruitful framework for examining how instructors might promote an epistemology of science during group work, inter-group collaborations and scientific argumentation. Importantly, using the lens of CCE situates inclusiveness as an integral component of sound science (Muis and Foy 2010). Reimagining lab work in this way should support students’ capacity to experience and understand how the social and rational components of science are integrated as knowledge is generated (Hashweh 1996).

A laboratory that effectively models science, grounded in CCE, must therefore (a) provide a relevant context for collaboration and productive student-to-student dialog, (b) include established procedural guidelines that scaffold students into inquiry and argumentation, and (c) adopt cultural norms of responsibility/tolerance and embrace scientific norms of empiricism and skepticism (Duschl and Osborne 2002). In general, the first two can be achieved via strategic and informed curriculum development, but the third depends upon the instructor. Within the context of our study, GTAs with limited pedagogical preparation and essentially no formal study of the philosophy of science or epistemology are tasked with this challenge.

Research question

In order to explore the operationalization of CCE within the teaching laboratory, we investigated the instructional moves that GTAs implemented during a four-week, open-inquiry and argumentation experience at the end of a semester. Our guiding research question was: How does instruction about knowledge production in scientific communities emerge in a lab whose structure was designed to promote such teaching? Given the study’s theoretical framework, namely CCE, we hypothesized that implicit or explicit instruction about knowledge production in a scientific community would emerge via GTAs’ discourse practices and/or instructional moves; and, when employed, these actions might promote broader student participation in a mock scientific community.

Research methods

Context

Over two consecutive semesters with different GTAs and students, data were collected in a non-majors, introductory biology laboratory (hereby referred to as Biol 100) as part of a multiple case study (Merriam, 2002; Merriam & Tisdell, 2015). We focused on the last 4 weeks of a semester-long course during which GTAs taught the same laboratory sequence in multiple sections of the class; each section was scheduled for 2 hours, once per week, and ranged in size from 25 to 33 students. To promote consistency among lab sections, GTAs met as a group weekly throughout the semester with the second author to review expectations while discussing the “lesson plan” (i.e. instructional resources, presentations) for the forthcoming lab.

Lab preparation and expectations

Pre-lab meetings never lasted longer than an hour, and part of each meeting revolved around the logistics of running the lab. During this time, we also reviewed and highlighted each lab’s focus, providing a typed-out copy of these objectives for each GTA to review and ask questions about. Depending on the focus of the lab, objectives dealt with varying aspects of the practice(s) of science (e.g. controlling variables, defending a scientific claim). Outside of these four meetings, the second author had prioritized two broader “course objectives” with the participants prior to data collection. First, GTAs had practiced using divergent questioning to encourage student critical thinking, and second, they had read and discussed an article on the importance of teaching science to non-majors (Feinstein et al., 2013). In general, all participants were aware the course had been structured to give students experience with the methods of science, including observational studies, controlled experiments, scientific writing, scientific argumentation from evidence, and lab skills.

Finally, we did not provide additional training or coursework for GTAs focused on social epistemologies of science for two reasons. First, we wanted this study to mimic what is typical of most GTAs’ teaching experience (i.e., no formal training in epistemology). Second, we believed the lab’s structure, resources, and content could still foster opportunities for the operationalization of CCE in the lab given the relationship between the practices of science and the creation of community-driven knowledge production (Sandoval, 2014; Wickman, 2004).

Laboratory content and sequence

Students in Biol 100 were tasked with collaboratively designing and implementing an investigation, defending their results and writing a lab report that detailed their process and results. The lab was designed to use “claims-evidence-reasoning”, a commonly implemented strategy for promoting scientific reasoning through argumentation in secondary and early college science classes. This culminating experience in the course followed other experiences wherein students had practiced utilizing various methods of scientific inquiry; ultimately progressing towards a more open inquiry involving pill bugs. Table 1 provides an overview of the scaffolded, four-week sequence of lab exercises, which was designed for a gradual “release of responsibility” to the students (Fisher & Frey, 2013). Aggregation behavior (of pill bugs) was selected for this lab because it represents an authentic, complex, open-ended problem in animal behavior (cf. Broly, Devigne, Deneubourg, & Devigne, 2014; Tuf, Drábková, & Šipoš, 2015). Briefly stated, pill bug aggregation has been hypothesized to be related to environmental manipulation (e.g. water availability) or social communication (e.g. chemical signaling).

Table 1 Animal behavior (pill bug) laboratory scope and sequence

As described above, experiences in the lab were designed for students to practice formulating a scientific question, developing/testing hypotheses, and using the evidence generated to defend claims and practice critique of science in a community (as opposed to specifically understanding pill bug behavior). Undoubtedly, content-specific disciplinary reasoning (Krist, Schwarz, & Reiser, 2019) would be needed to successfully engage in many of these skills. To support this necessary content-specific learning, abbreviated summaries of relevant articles (e.g. Tuf et al., 2015) were provided and actively discussed with students. Additionally, feedback was provided to students throughout the process as they submitted written work for critique and feedback at each stage in this scaffolded process.

Participant selection

A total of six GTAs were selected from a pool of 15, all having taught identical sections of Biol 100. Potential participants were initially eliminated if they had previously taught multiple sections of Biol 100 because we wanted to focus on novice instructors, relatively new to teaching this specific lab. Additionally, three potential participants elected not to consent, primarily because they were anxious about being audio-recorded while teaching. We also sought a balanced gender ratio, which limited our final selection. Ultimately, we enrolled six GTAs: three women and three men, all native English speakers, none of whom had more than 3 years of teaching experience, and only one of whom had taught Biol 100 once before (Table 2). We therefore contend that our selection process enabled us to identify suitable participants and thereby uncover “the conditions under which the construct or theory (of interest) operated” (Miles & Huberman, 1994, p. 29), that being the potential emergence of CCE in the lab with fairly novice instructors.

Table 2 Graduate teaching assistant demographics

Laboratory section selection

To limit the breadth of data collected, researchers and GTAs used purposive sampling (Patton, 2002) to identify two target groups of students for in-depth data collection per GTA. GTAs assisted with this process given they had 2 months of prior experience with students. We asked GTAs to consider student groups most likely to be present all 4 weeks of the lab, that were also prepared for lab (i.e. finishing “pre-lab” assignments/readings). Student groups were “assigned” at the start of the semester and remained the same throughout. After identifying potential participant groups, our selection was limited to groups that fully consented to participate. In total, we worked with two groups of three to four students per GTA (~ 48 students total), in six sections of Biol 100 (Note: one group consisted of just two students after a student switched groups during the lab experience).

Data collected

During the lab sequence (see Table 1), multiple GTA-student and student-student interactions occurred as students designed unique investigations. Audio recordings from the lab were collected using “lapel” mics/recorders as well as single audio recorders, which were placed on target groups’ tables (24 unique labs – 2160 min total; labs ranged in length from 90 to 130 min). Content logs (time-stamped and organized incident-by-incident) were created next so incidents ranging from ten seconds to four minutes could be catalogued (Jordan & Henderson, 1995). Incidents at this point also included short, researcher-generated descriptions and minimally transcribed phrases.

Next, all incidents within a content log were explored in order to identify exchanges for further analysis. Students’ written work, GTA digital presentations, and typed field notes (Denzin, 1978) were also collected by the first researcher or a research assistant, and these data sources were referenced when identifying incidents to transcribe (Lemke, 2012). Ultimately, the study’s research question and theoretical framework guided the selection of incidents to be fully transcribed (Charmaz, 2006). After transcription, each laboratory session yielded 15–30 minutes (or 5–15 transcribed instances) of transcribed audio (both from a GTA’s lapel mic and the target groups’ audio recorders).

Additional data sources included transcriptions of two GTA focus group interviews (one each semester; Krueger & Casey, 2014) and, as recommended by Sandoval (2014), six focus group interviews with students (five to seven students per interview), which were used to garner “learners’ reflections” after completing the four-lab sequence (note: both interview protocols are in Additional file 1). Transcriptions, content logs, interviews and other artifacts were organized and analyzed in a data management program (Edwards-Jones 2014).

Data analysis

Given our emphasis on GTAs’ instructional choices while implementing a shared curricular context (serving as the study’s unit of analysis; Patton 2002), we explored how specific, discursive pedagogical moves might have influenced and emboldened students’ “legitimate participation in science-as-practice” (Stroupe 2014; p. 488). More specifically, we sought opportunities wherein students were “positioned with, perceiving, and acting on, opportunities to shape the knowledge-building work in their classroom community” (Miller et al. 2018; p. 6). We recognized the need for GTAs to first foster students’ epistemic agency, which in turn might permit opportunities wherein community-based knowledge construction could be highlighted (Aydeniz and Bilican 2014). To accomplish this, we borrowed the phrases “epistemological move” and “epistemological talk” (Grinath and Southerland 2019; Lidar et al. 2006) to examine GTAs’ pedagogical moves, given these actions often communicated what counted as acceptable knowledge as well as who produces it; both of which were important to our framework of CCE.

Initial coding

Initial, open codes were inductively generated for over 300 transcribed incidents (each lasting no longer than four minutes), yielding just over 50 unique codes (e.g. encouraging student autonomy). During initial coding, additional data (e.g. field notes) were examined for contextual clues that might support or contradict the creation of a given code (Lemke 2012). Brief descriptions were also developed to further define a given code when needed. Consensus among the research team concerning initial codes was not sought at this point given “all analysis is a form of interpretation and interpretation involves a dialogue between researcher and data in which the researcher’s own views have important effects” (Armstrong et al. 1997; p. 605).

Category construction

Initial code frequencies were examined for trends, and three broad patterns emerged that were related to the enactment of CCE in the lab. The first category consisted of opportunities wherein student epistemic agency was either encouraged or discouraged. These moments in teaching have been described as resistance points (Manz 2015) because they represent interactions wherein students might have been positioned as agents in a knowledge-producing community. The second category consisted of exchanges (occurring mostly during the argumentation session) that revealed how a mock scientific community had manifested; primarily as students dealt with their own and their peer(s)’ subjectivities. A third category included numerous instances of “playing school” in the lab that we considered deleterious to the emergence of CCE in the lab (cf. Jiménez-Aleixandre et al. 2000). Using constant comparative methods (Glaser and Strauss 1967), all initially coded incidents were “housed” within one of these conceptual categories; most fit in only one category, though on occasion an incident could overlap and reside within more than one.

Next, we created analytical memos that contained transcribed incidents or “ethnographic chunks” (Jordan and Henderson 1995), analytical writing, and theoretical excerpts in order to better define and organize the themes embedded within a broad conceptual category. During the process of synthesizing these memos, descriptions, insights and interpretations were often combined and used interchangeably – a common strategy with exploratory, qualitative data analysis (Wolcott 1994). When necessary, discrepant episodes were also explored to demonstrate that disconfirming instances were not overlooked (Erickson 1992). Finally, the analytical memos that we generated were distilled into more detailed themes related to CCE, which we used to structure the interpretation and presentation of our results.

Results

The final categorization of data to answer our research question included the following structures and themes: 1) encouraging student epistemic agency by focusing on important aspects of scientific inquiry and expressing much needed epistemic trust, 2) discouraging student epistemic agency by pre-emptively offering solutions that ultimately delegitimized student efforts, and 3) factors related to implicit or explicit references to community knowledge production and the authenticity of that community. Given the quantity of data collected, we report only a representative subset of incidents from weeks two through four (Patton, 2002; note: all 77 representative incidents can be found in Additional file 1). Lastly, exchanges portraying students “playing school” are minimally presented in the results; they primarily informed our understanding of the “generic” community that emerged (as presented in the final section of the results).

Encouraging agency

The structure of the lab, its subtle complexity, sequencing and open-endedness were especially useful tools for encouraging epistemic agency among students, which is precisely why we bounded this four-week sequence. Most GTAs easily leveraged the complexity of the lab’s phenomenon as a driver to engage students – is pill bug aggregation induced via social or environmental factors (cf. Broly et al., 2014). The exchange below reveals how Nina, one of the GTAs, did this while simultaneously positioning her students as epistemic agents.

Nina: the way you can think of this (type of behavior); a social behavior would mean you see organisms in the same place intentionally, they want to be together, whereas if it’s not social they are all there together…maybe because it happens to be a good environment and they all want that good environment, they don’t care if they’re close to other pill bugs, so you’re trying to distinguish whether they are together on purpose, (or) together not on purpose…and that may mean (well), there’s a variety of ways you can do this (Additional file 1, Table 2A; Instance #2; GTA Mic, Wk. 2: Nina).

Nina appropriately framed the complexity of the phenomenon while also indicating students themselves needed to “distinguish” which factor was most influential. Nina later informed students how valuable it was for them to know “their decisions” (GTA Mic, Wk. 2: Nina) were impactful; a statement Erica (another GTA) also echoed during the focus group interview. Briefly, Erica felt students needed to be positioned as “experts” who could take “ownership” and “argue” (GTA FG Interview) their claims given they planned and executed their investigations themselves. The open-endedness of the authentic phenomenon was key to the GTAs’ positioning of students as epistemic agents because recognition of this complexity only occurred when GTAs successfully leveraged the structure of the lab sequence (as Nina had above). Students retrospectively appreciated that their decisions mattered, often expressing pleasure they got to “design (their) own experiment” (Student FG Interview).

Students recognized their instructors needed to “build them up to the point where hopefully (they) could run their own experiment” (Student FG Interview). In the exchange that follows, four students were contemplating their experimental design while discussing a contrived term (“pre-aggregated”) related to pill bug behavior. Prior to the exchange, Erica had visited the group three times, approximately every 5 minutes. Her initial visit revealed the group wanted to pursue an overly simplistic idea she now wanted to address. The exchange below occurred five minutes after Erica departed, having questioned the group’s initial method of measuring pill bug choice (a key variable of interest). At this point, one student had decided the group might modify their design based on the critiques received from Erica; believing they should instead include pre-aggregated pill bugs – a strategy that required them to “hide” a few pill bugs in a small cylinder-shaped container positioned within a choice chamber.

Student 1: (I think) our claim is what we think is going to happen, so we’re going to say [write] that um the lone bug ((laughing)) or I guess we should be more specific, the lone pill bug

Student 2: they’re social right…so we think that they’ll go to the aggregated bugs (in the container).

Student 3: yeah.

Student 1: ((writing while talking)) (pill bugs) will gravitate towards the, I don’t know if we would say pre-aggregated…. see this is what’s confusing me about (the) environmental (choice) because their behavior is environmental AND social, right…because they’re going towards their natural environment and is that…

Student 3: well we have their natural environment on the one side, a place they wouldn’t (necessarily) want to be in…

Student 4: but the (pre-aggregated) pill bugs are on both side (hidden in the container).

Student 1: OHHHH.

Student 3: no (that’s not right), we only have bugs in the bad side [non-preferred treatment: high luminosity], to see if they choose (their natural) environment or not.

Students 1& 4: OHHHH.

Student 1: I thought you were saying…that makes more sense, okay so we won’t put them in (to see if they) gravitate towards (both sides), okay I understand now.

(Additional file 1, Table 2A; Instance #10; Student Group Mic, Wk. 2: Erica)

Getting the group to this point took considerable effort, patience, and skill. Erica frequently transitioned from group to group like this, asking questions and providing critiques students contended with after she left. In the exchange above, which occurred in Erica’s absence, her students advanced their design to the point where they would have “meaningful data” (Student Group Mic, Wk. 2: Erica). Erica’s discursive moves, combined with her trust that her students would respond to her critiques, positioned students as epistemic agents who could respond to uncertainties and seek their own solutions. For example, the idea to use previously grouped, fully enclosed pill bugs originated in a student’s prior experience observing pill bugs’ preference to be “in the shade” (Student Group Mic, Wk. 2: Erica). Student agency was also evidenced in the students’ creation of a new, “scientific” term (“pre-aggregated”) and their comparatively unique experimental design. Opportunities like this occurred when students fully trusted that their personal ways of knowing could be leveraged to build potentially new knowledge. In retrospect, students praised the lab and their GTAs for opportunities to do this because it permitted them to have “free reign” (Student FG Interview) and control over what transpired, an important first step towards generating and later defending a valid scientific claim.

Focusing on methods of inquiry

After students had been positioned as agentic, they next encountered predicaments and tensions that required deeper consideration. At this point (and as intended) GTAs needed to focus their students’ attention on the practices of scientific inquiry (e.g. question development, controlling variables) to encourage more expert experimental design and preparation for the upcoming argumentation session. Numerous exchanges were observed wherein GTAs used a form of re-orienting (Lidar et al., 2006) to accomplish this, usually by referring to the subtle complexities of students’ designs (e.g. dependent variable measurement) that were previously ignored, yet critically important to their work. Similarly, GTAs frequently critiqued students’ designs by prompting them to define their null hypothesis, for example, asking them to explain what no choice would mean given the proposed design idea. In each of these cases, GTAs successfully encouraged student agency while simultaneously participating as valued community members in the knowledge construction process.

For example, Nina’s students were considering her initial query (“Can you really decipher the choice pill bugs are making with this design?”), which one student later translated for the group by making an actionable suggestion (“We may only need one shelter”). One of Nina’s students described how Nina always seemed to “lead us in the right direction” (Student FG Interview) using critique, while another commented that when Nina would “lead” her (and her peers) like this it helped her realize just how “much thought needs to be put into these (pill bug) experiments for it to go well” (Student FG Interview). Nina later noted why it was personally meaningful for her to focus on her students’ designs, claiming that if her students could “see how hard it was just to get a good pill bug experiment, then hopefully they would see how much work goes into a good cancer experiment or climate change experiment” (GTA FG Interview). Overall, when students were encouraged to refine their investigations via the critiques their GTAs provided, student groups pressed on, often attempting to execute a well-designed investigation capable of yielding potentially useful data to later present to their peers.

Students needed to believe they could develop a defensible claim regardless of whether they had implemented a potentially “flawed” investigation and despite feeling their claim might be unconvincing. Again, the structure and sequence of the lab attempted to teach students that scientific claims must be presented and exposed to critiques. When GTAs successfully leveraged this requirement, they did so by situating their concerns within the lab’s community. Nina, for example, accomplished this by encouraging her students to be creative.

Student: we didn’t know what a 100% ideal environment would be (for the pill bugs), like would it be dry soil, would it be wet soil, we just didn’t know.

Nina: thinking that (way now is beneficial), maybe the good environment (you selected) wasn’t perfectly representative of what a “good” environment for the pill bugs would be, yes…(but now, provided you already ran the investigation) you might (still) have to refute that (potential critique from your peers), you might have to be a little bit more creative when thinking about why it [decision that you previously used as ideal an environment as necessary] was still good you used the design you used. (Additional file 1, Table 2A; Instance #22; GTA Mic, Wk. 4: Nina).

Rather than dismissing her student’s concern or minimizing the impact of the potentially flawed design (e.g. “you might”), Nina encouraged this student to recognize the utility in preemptively identifying the potential flaw. Instead of discrediting the group’s entire design and resulting data, Nina directed this student to conceptualize how she might instead justify her initial design. As previously noted, most designs were somewhat flawed given certain limitations dealing with resource availability; a non-trivial yet manageable concern when appropriately handled. Instead of dwelling on this potentially disparaging issue, Nina encouraged the group to contemplate a method for proceeding (i.e. “be a little bit more creative”); an instructional strategy intended to encourage epistemic uptake that would support participation in the argumentation session.

Few GTAs exhibited the capacity to withhold unnecessary judgements in moments like this, given students had already collected the data they now needed to defend. Nina and Terry displayed an affinity for this skill, oftentimes doing so by helping students understand how failing to reject their null hypothesis might benefit the wider community, a requirement we would expect to emerge in a lab promoting aspects of CCE. Directing students to think about the community and the “larger picture” of aggregation behavior was beneficial because trivializing students’ investigations and data would have been unhelpful at this point, given students still needed to produce a defendable claim for the community to evaluate.

Epistemic trust

Expressions of epistemic trust in students often emerged when GTAs supported students as they analyzed their results, given the outcome of this process would yield students’ knowledge claims. Terry exemplified this approach below using an example data set and presentation slide about termite behavior, wherein he successfully shifted responsibility to his students. More specifically, Terry would convincingly describe how students (similar to practicing scientists) could employ a logical progression when interpreting their data in order to formulate a claim.

Terry: their [the other scientists] conclusion from that (study) is termites use scent to navigate, when the termites decide on a path they want to follow, they use their sense of smell, not their sense of taste or sight or touch (to assist with navigation)…now obviously there’s probably something in there like “feelers” [appendages] that might keep them from bumping into walls and such, but this (line of reasoning) is what we call logical progression….I [a hypothetical scientist] have some data, I have some background information that helps me understand what my data might be telling me, and then that leads (me) to (formulate) a logical conclusion, which I can use as my claim (Additional file 1, Table 2A; Instance #28; GTA Mic, Wk. 4: Terry).

Terry recognized this group had been struggling to conceptualize their task, namely interpreting the data they had collected, so he re-introduced the prior example involving termites in a self-referential way (e.g. “I have some background knowledge”) hoping students might recognize they could also employ this strategy. Similar to the discursive moves described earlier, Terry offered necessary support here by encouraging his students to independently move forward.

Terry’s students appreciated his pedagogical supports, with one student stating that he [Terry] never wanted to “give you the answer directly”; instead, he used “analogies to try and help you understand” what you could potentially do (Student FG Interview). Other students commented how Terry always seemed to find a way to “get their minds working”, ensuring they were “ready to take lift off before they did this experiment” (Student FG Interview). Terry, more than any other GTA, used generative epistemological moves. When his students were initially overwhelmed and unsure how to proceed, he would “change their focus in the meaning making process” (Lidar et al., 2006; p. 161) – typically by expressing much needed epistemic trust (Grasswick, 2010).

For example, Terry knew analyzing data was confusing for many students because he would “haunt their tables” and wait for them to “quiet down” (GTA FG Interview). When this happened, students would change focus because his “awkward presence” (GTA FG Interview) would encourage them to keep discussing their data without the need for his immediate intervention. Terry’s subtle discursive moves were successful because they indicated to his students that they needed to carry on. He also commented later that he often told students he “trusted” them to accomplish this task, thereby positioning himself as a facilitator and not a “cheat sheet” (GTA FG Interview); a concern other GTAs had expressed. Overall, Terry commonly helped his students envision themselves as capable knowledge builders – an accomplishment other GTAs struggled to attain, as their students were instead discouraged from taking desirable epistemic “leaps” when certain complexities arose.

Discouraging agency

Counter to exchanges in the prior section, there were many instances during which GTAs essentially withheld agency from their students, particularly when students proposed what GTAs perceived as flawed methods of inquiry. As a result, GTAs readily provided “correct” solutions for students to employ, which simplified their inherently complex task and ultimately delegitimized students’ efforts.

Offering solutions

One GTA admitted afterwards that during the lab they felt pressured to “give students answers” instead of “asking good questions” (GTA FG Interview). This primarily happened because simply letting students’ designs fail meant students would later need to defend potentially indefensible data. As a result, GTAs felt the need to intervene and take on the necessary “epistemic considerations” (Berland et al., 2016), which we contend informed students that their individualized decisions and efforts were less important in supporting the community. Students also exhibited a level of disengagement and helplessness at times, which further complicated this issue. One GTA later noted he felt students were “doing everything possible not to think” (GTA FG Interview), a belief that made it difficult for him to support his students’ capacity to troubleshoot when challenges emerged (Gardner & Parrish, 2019). Given their limited teaching experience, this problem created distressing tensions for many GTAs that were often alleviated by supplying groups with easy-to-implement solutions at key transitional moments (i.e. instructional moves; Lidar et al., 2006). The tension between instructor and students in these interactions is exemplified below. Prior to this exchange, Holly had suggested the group reconsider their already-recorded research question.

Student: ((reading her group’s question out loud)) are isopods more likely to use a shelter that has been previously used by them or other (isopods) like them…and we figured that’d be related because they might know (or recognize) the pheromones (previously released in the shelter).

Holly: so that would be a good question, but you would also need some kind of environmental thing [variable] going on also, right…so, you could vary light and dark, (maybe) food (availability), like do they have to choose between food and friends, home soil or foreign soil, cold or hot… there’s a couple different ways (to do this)…so, you can’t just have (a choice between) nothing or your pill bug’s friends…‘cause they’ll probably go towards their friends [familiar pill bugs]…so, you have to give them [the pill bugs] a couple of options and see what they choose and you don’t just have to do Y-mazes you can do like cross mazes, (you have) different options (Additional file 1, Table 2B; Instance #5; GTA Mic, Wk. 2: Holly).

In this exchange, Holly emphasized a “correct” investigation her students could carry out given their interest in exploring pheromones. Holly frequently struggled “to effectively press” (Grinath & Southerland, 2019) her students in situations like this, instead offering explicit “suggestions” her students could willingly follow (Field Notes: Wk. 2 – Holly). Here, she supplied a sanctioned method for carrying out their investigation, aiming to keep the group from running an investigation that tested only one of the necessary variables. As a result of her explicit interference, she likely prevented the group from having to defend indefensible data. However, by solving this problem (e.g. listing simple variables such as light availability for the group to add to their design), Holly also insinuated a need for the group to rely on her expertise, which likely discouraged students from pursuing their own ideas and “taking credit” for the outcomes that resulted, whether positive or negative.

In a related exchange (below), Roger employed a similar move after discovering a group of students struggling to clearly define their dependent variable.

Roger: another thing you could do, you could let the experiment run for 10 minutes, something like that, and then you could have a baseline number for what (constitutes) an aggregate (of pill bugs), so say you think…

Student: like a cluster…

Roger: yeah, so let’s say three pill bugs is (defined as) an aggregate and then over those 10 minutes you can count how many times they aggregate.

Student: so that’s what you were saying (earlier)…

Roger: so then that way you can have an experimental (trial), control (trial), and then you can mark the number of aggregates per trial.

Student: like on the Y-axis you could have the number of clusters that they formed and X-axis across is time…so you’d have (data) across time (showing) how many (aggregates) were formed.

Roger: well, I was thinking you would want to do it across treatment, does that make sense, (for example) in the experimental treatment if they aggregated eight times on average of the two replicate (trials), but in the control they only aggregated three times…something like that you know...does that make sense?

Student: yep.

Roger: you wouldn’t want to do it over time (Additional file 1, Table 2B; Instance #8; GTA Mic, Wk. 2: Roger).

Like Holly’s decision above, Roger elected to provide this group with a clear-cut path forward, beginning with an experimental design idea and ending with an analysis and data presentation strategy. Again, instructional moves of this nature prevented students from fully engaging in and experiencing the difficulties of developing and testing a scientific hypothesis (a primary objective of the lab sequence). Unfortunately, additional instructional moves mirroring this approach were documented elsewhere in the lab (cf. Additional file 1, Table 2B). This broader theme emerged as GTAs felt compelled to leverage their own expertise (as opposed to their students’), which was accomplished by simply “telling” students what to do, rather than supporting their capacity to generate meaningful and potentially innovative ideas to pursue.

Delegitimizing students’ efforts

GTAs deterred students from pursuing their original design ideas when they hinted students’ ideas might be “tough” or “troublesome” to implement (Additional file 1, Table 2B; Instance #7; GTA Mic, Wk. 2: Nina). In response to these descriptors, students were unsurprisingly willing to have their now-discredited ideas replaced by those of the GTA – a responsibility most GTAs accepted at some point. During one specific instance, after receiving Nina’s suggested solution, a student commented that carrying out this “new” investigation would “not really be too hard” (Student Group Mic, Wk. 2). The exchange below epitomizes how delegitimization occurred. The instance took place after a student retroactively discovered a disappointing flaw in his group’s design that he needed Holly to attend to.

Holly: yeah, I mean really you guys needed a larger data set, you needed more replicates in order to be able to definitively say one thing [claim] or the other…so, it’s not too surprising that you would have a couple of things [results] that don’t really make sense.

Student: so, you’re saying we don’t have enough trials?

Holly: no…well yes…

Student: we didn’t have enough time.

Holly: no (you’re right), that is what I’m saying, but like no no no no that’s what I’m saying, but it’s not your fault, this was like a herculean task, right (Table 2: Instance #16; GTA Mic, Wk. 4: Holly).

After initially validating this student’s feelings of unfairness, Holly wavered between affirming and delegitimizing his effort (i.e. “no, well yes”), eventually claiming the “herculean task” would not be resolved here anyway. Given every student group in the lab conceivably experienced similar constraints (e.g. limited time to run more trials), Holly’s decision to state the group “did not run enough trials” is especially problematic, as the group still needed to produce a defensible claim for the argumentation session. We again contend that subtle (yet important) decisions like this, when executed repeatedly, delegitimized students’ efforts as students eventually uncovered just how poor they were at “playing a scientist”, meaning they instead needed to rely on their GTA’s approval.

In student interviews, Holly’s students expressed a need for her approval before “running (their) experiment” (Student FG Interview), indicating an epistemic dependence on Holly as the “expert” (Raviv et al., 2003). Holly later confirmed this occurrence, stating some students wanted to simply “get the answers from us [GTAs]” (GTA FG Interview). In interviews, other students expressed a willingness for and reliance on GTAs to offer direct guidance during the lab – claiming they wanted to merely “follow an experiment” so they would not do something “wrong” (Student FG Interview). Other students exhibited extreme disengagement, one claiming she came to the laboratory to “do the work, then leave” (Student FG Interview). The combined impacts of some students’ negative attitudes led one GTA to believe these students would “do anything they could to not critically think about what’s going on” (GTA FG Interview). To be clear, we do not assert that GTAs knowingly set out to delegitimize their students’ efforts; we instead posit the struggles that materialized created tensions that GTAs (not students) readily solved given their expertise and influential role over students in the lab.

Emergence of community

Students were guided to leverage the community primarily as a means for garnering peer critique. Depending on the instructional framing, this was accomplished in a genuine or generic manner. More commonly, GTAs obscured the value of the community because the mechanisms holding it together were loosely bound.

Emergence of a genuine community

When GTAs highlighted the value and role of the laboratory community they did so implicitly, typically prior to the argumentation session before students made counterclaims about their peers’ work. Occasionally, GTAs suggested students would need to reference relevant scientific literature, primarily as a means for justifying subjectivities that were exposed upon hearing their peers’ counterclaims. Many GTAs noted students were attempting to interrogate their peers’ “reasoning” when providing potential “critiques” (Additional file 1, Table 2C; Instance #13; GTA Mic, Wk. 4: Nina). Other GTAs employed similar strategies, suggesting students contemplate whether their peers’ “justifications make sense” or consider whether a given claim was supported by a “real, rational argument” (Additional file 1, Table 2C; Instance #17; GTA Mic, Wk. 2: Terry). Similarly, GTAs suggested students determine if they could potentially “interpret the data (presented) in a different way” (Additional file 1, Table 2C; Instance #11; GTA Mic, Wk. 4: Alex); a discrepancy they might want to mention upon hearing a given argument. Others suggested students ask questions concerning the setup of a given trial (e.g. “did they change out the filter paper”), though students were somewhat dissuaded from doing this because all of their designs contained small, somewhat uncontrollable flaws. During the focus group interview, Nina discussed how students believed their investigations were “terrible” after hearing multiple critiques and, as a result, she needed to convey how peer feedback might be beneficial.

Nina: (during the argumentation session) you can get kind of into the details of whether the claims your peers make about soil fully relate to (the) pill bugs (they tested) and whether there might be any extra variables (they’re not accounting for)…(you could also consider) whether you would always make the same claim from the data (they presented)…not to be critical or harsh (but) just to give them things to think about because there’s drawbacks to every study and so (you can) think about these drawbacks (Additional file 1, Table 2C; Instance #10; GTA Mic, Wk. 4: Nina).

Shortly after this instance, Nina described how the comments students provided should “help” their peers; however, her notion that students should provide “non-critical” feedback (i.e. “not to be harsh”) was somewhat contradictory given the nature of eliciting critical feedback. Given explanations such as this concerning the value of the community, GTAs struggled to precisely explain how the argumentation session represented a forum for desirable and necessary public critique (e.g. embracing peer skepticism; Duschl & Osborne, 2002). In the following example, Nina further explained how students might provide useful peer critiques.

Nina: just think about it the way we did (it) with the allelopathy counterclaims, where some people [other scientists] critiqued the idea that there was no soil (in the devices we used, this was important) because soil can impact how chemicals might pass through (the soil), you know, (these are) things [potential challenges] that don’t make your experiment wrong, but they might affect how it applies to the way this would work for pill bugs crawling around outside (in their natural environment) (Additional file 1, Table 2C; Instance #6; GTA Mic, Wk. 4: Nina).

Nina attempted to “stretch” the community here in a couple of ways: first, she temporally extended it by referencing a previous experience she had with students in the lab, and second, she described how a broader community (i.e. scientists via peer-reviewed research) might comment on their community’s findings – essentially critiquing the study’s design (i.e. the lab environment differs from the natural environment). Other GTAs also suggested students reference relevant literature or “outside sources” (Additional file 1, Table 2C; Instance #5; GTA Mic, Wk. 4: Erica), though these instances were limited. Roger once suggested students defend their “methodology or sample size” (Additional file 1, Table 2C; Instance #7; GTA Mic, Wk. 4: Roger) using relevant literature when combatting potential counterclaims. Retrospectively, we were able to uncover how some GTAs genuinely leveraged the community like this as a viable and useful forum for supporting students’ efforts leading up to and during the argumentation session, though continuity of this approach throughout any one GTA’s lab was uncommon. Instead, we discovered isolated attempts to leverage the community that, had they been more commonly employed, would have better defined the value and purpose of the community for students.

Inter-group exchange

The exchange below revealed how Erica encouraged a meaningful inter-group exchange. Just prior, Erica had prompted a student to explain why another group’s experiment did not “technically prove” what they claimed it had.

Student 1: because we don’t really know if the pill bugs recognize it [the pheromones] or not…it could be that they go to the familiar soil and there’s not actually any other pill bugs for them to aggregate with (perhaps), so then there’s no reason for them to stay in that environment.

Erica: that’s great…that was wonderful.

Student 1: thank you.

Erica: that was a really good point, so you didn’t actually… even though there was no preference, they [the pill bugs] still might be able to recognize (some form of) familiarity, but (here) it was not (necessarily) relevant to them in this situation [the setup of their design]…they [the pill bugs] didn’t care. How did you guys determine that there was no preference (in your experiment)? (Additional file 1, Table 2C; Instance #2; Student Group Mic, Wk. 4: Erica).

In this exchange, student 1 explained how another group appeared to simply assume a choice had been made when one could easily argue “no preference” had been shown in response to the available treatments. After praising the student, Erica successfully transitioned this discussion by next focusing on student 1’s design idea, which another student (below) quickly clarified. In an interesting discursive move, Erica next prompted the pair to discuss their recommendation further with the original group, which had tested whether pill bugs could detect a difference between native and non-native soil – a detection they believed could be measured via time.

Student 2: if you keep it [no preference] you’re going to have to state what no preference is…(for example) does that mean they [the pill bugs] just stayed at the (same) place and didn’t move left or right, I just didn’t know what no preference meant (in your design’s setup).

Student 1: maybe if you all kept track of how much time each one spent in each side of the choice chamber, it might be a better way to show your data, to show how long they [the pill bugs] spent in each one (side of the chamber) (Additional file 1, Table 2C; Instance #2; Student Group Mic, Wk. 4: Erica).

In these connected exchanges, two students attempted to fix a flaw in a peer group’s design because Erica convinced them their critique had merit and could therefore be useful – a move that positioned students as knowledgeable experts who were participating in an authentic community (Engle, 2006). Unfortunately, instructional moves that encouraged inter-group exchanges capable of bolstering community-level contributions to knowledge construction were limited.

Students’ recognition of the community

Students’ responses during focus group interviews were somewhat mixed regarding the importance of community composition and function for knowledge production. One student stated she typically asked the peers in other groups to “define (their definition for) aggregation” (Student FG Interview) because she found most groups had “different definitions” for this key dependent variable. From here, she felt she better understood their aims, which in turn informed her own critiques – a conceptualization that could be construed as recognizing the utility of community in the creation of knowledge. On other occasions, students described how participation in the argumentation session was beneficial because they could “see what other people were doing (or) how other people (were) think(ing)” (Student FG Interview). Students also discussed how scientists never work in isolation as they are required to “compare” (Student FG Interview) their ideas and beliefs with other scientists. Additionally, students often commented on intra-group dynamics, claiming that hearing multiple “viewpoints” would be a “positive” because there would be “more input” when designing their experiments. As a result, students could “feed off” (Student FG Interview) each other more readily, and this was advantageous. When asked how it would be possible for two scientists running the same experiment to get different results (see Additional file 1), one student in Holly’s lab commented that having “new eyes on the same research” could be beneficial in instances like this because everyone has “their own perspective”, so there would be value in running multiple trials with “different groups” (Student FG Interview). Students in other sections expressed somewhat similar ideas, claiming that when you have multiple people, you have access to a larger base of knowledge, and this might help “narrow it down” (Student FG Interview) when interpreting the outcomes of an experiment.
Given the content of these responses, students did value the community, though they did so because it enabled broader access to the knowledge claims being made, thereby limiting subjectivities – a value we contend is loosely tied to a conceptualization that scientific knowledge might be best constructed when a wide array of people are involved. Again, very few references to this value were explicitly discussed with students in the lab; this valuing may instead have emerged because the lab’s structure also required students to develop valid knowledge claims.

Explicit reference to community

Most GTAs were hesitant to explicitly discuss community-driven knowledge production with their students (e.g. Yerdelen-Damar & Eryılmaz, 2019). Stated differently, references to the practices scientists commonly employ that potentially attend to the tenets of CCE were sporadic. This finding emerged despite our direct suggestion that GTAs should explicitly point out for students how certain practices (e.g. scientific argumentation) enable communities to produce and vet various scientific claims. Students did comment that GTAs often “encouraged” them to talk with other groups and make “comparisons” (Student FG Interview) but, in general, GTAs did little more to explicitly emphasize the community’s value. The exchange below represents the single instance wherein an explicit reference occurred.

Terry: but I see what you mean though, if everyone just proposed the same thing [idea] you’d never get different perspectives…do you guys think perspectives like separate perspectives are useful in science?

Students: yeah.

Student: yeah, because that way you learn more…(rather) than just having one person’s (idea) that everyone’s agreeing (with)…because what if they’re [that one person] wrong…and then everyone just agrees with this wrong thing [idea/explanation] (Additional file 1, Table 2C; Instance #9; GTA Mic, Wk. 4: Terry).

Just prior, Terry’s students were discussing ideas for recording pill bug choice. Upon hearing his students’ response to his initial prompt, Terry told a common “joke” involving Nobel prize winners, claiming that oftentimes graduate students do the award-winning work, yet “old people” (i.e. professors) receive the prize. He followed this example by saying that getting “a bunch of people together who think differently” can be extremely valuable because these new ideas can then be tested.

Elsewhere, Terry described an understanding that “developing a scientific idea” involved “trial-and-error” (GTA FG Interview) and he felt his students mistakenly believed that new understandings “just kind of come up” (GTA FG Interview) after spending a couple of weeks in a lab. He claimed he often needed to explain that when a dilemma needed to be addressed in science, “dozens of people in labs all over the place for years (have) to come up with some rough idea they all understand is kind of wrong, but is likely closer than the idea they had before” (GTA FG Interview). Terry, much more than the other GTAs, expressed his belief that scientific knowledge is constructed in communities and that there were standards against which knowledge was held in that community. The above exchange represented the most explicit expression of a social epistemology of science in alignment with CCE that we observed.

Generic community

The generic community served one primary purpose: to help students generate counterclaims for their lab report. As opposed to harnessing the benefits of the community (e.g. promoting diversity of content expertise), student interactions were instead confined by superfluous rules to follow (i.e. “don’t be mean”). Concerned about following the rules of this pseudo-community, students focused instead on executing various procedures in order to make correct claims. Alex, for instance, justified students’ participation in the argumentation session by connecting it with the peer review process and students’ final lab reports, which were unavoidably tied to their grades.

Alex: if you don’t (receive any) critiques then you won’t know what is wrong with your experiment…in the process of science if you’re going to submit any work to be published or anything like that, people are going to review it and they’re going to be tearing it apart and this [the review process] makes your paper better, okay…so this is going to help your lab report…if you don’t critique someone else’s lab report, then they won’t be able to change anything (in their report) and then they’ll get a lower grade, so you’re doing them a favor by critiquing them, you’re not being mean, okay…(Additional file 1, Table 2C; Instance #25; GTA Mic, Wk. 4: Alex).

With this whole-class announcement, Alex misconstrued the purpose of exposing students’ claims to diverse criticisms during the argumentation session. He continued with this announcement, suggesting students not “take it personally” because ultimately the critiques received were intended to be “helpful”; a suggestion with an alternative purpose: namely, Alex needed students to view themselves as independent “reviewers” and understand that failure to embrace this role would penalize their peers’ “grades”.

We required students to respond to two or more critiques in writing because past iterations of the argumentation session produced lackluster results, as students took little care to document the criticisms they received because this information did not appear to inform their efforts. In the above instance, Alex clearly utilized this modification/scaffold in an unintended manner and the misinterpreted message that resulted shaped students’ view of the community.

Over-scaffolding

We documented other challenges associated with the curricular resources we created that aimed to cultivate a more genuine community. For example, students were provided a list of potential questions to ask during the argumentation session to encourage meaningful student-to-student dialogue (e.g. Does the group make too large of a leap from their evidence to their interpretation?; cf. Additional file 1). Unfortunately, some GTAs interpreted this support as a “list of questions” that needed to be answered on students’ “critique sheets”. From here, students believed they could “look at these critiques” and “fill out revisions” on their forthcoming lab reports (Additional file 1, Table 2C; Instance #23; GTA Mic, Wk. 4: Alex). During the argumentation session in the exchange below, one student received a confusing response after asking a question from the “list”.

Student 1: did you all have to make any assumptions in your experiment?

Student 2: assumptions…I mean we thought they [the pill bugs] would go towards the feces, that’s about the only assumption I can think of (Additional file 1, Table 2C; Instance #27; Student Group Mic, Wk. 4: Erica).

In this brief exchange, Student 2 misunderstood the intent of the question; seemingly thinking she had been prompted to discuss any predictions made prior to running the investigation. Neither student likely exhibited much interest in hearing the answer to the original question given someone else created it (i.e. it was from a “list”). Student 2 later revealed her own disengagement from the activity, stating her group had very “inconclusive results” because their pill bugs often “escaped” the choice chamber so really, they “just watched bugs walk around” (Additional file 1, Table 2B; Instance #17; Student Group Mic, Wk. 4: Erica).

Students’ levels of engagement in the laboratory varied greatly, and GTAs believed some lab sections were “curious” (GTA FG Interview), while others would get “more nervous about their grades” (GTA FG Interview). As a result, GTAs were constantly beckoned to “check in” and answer trivial student questions related to “points” and grades. Alex believed some students failed to care much about some aspects of the lab sequence because certain requirements might not “increase their grade” (GTA FG Interview). Roger noted his students felt he would “grade them poorly” (GTA FG Interview) because they were novices at writing lab reports. A form of “forced” or “unauthentic” (Manz, 2015) participation in science was also evident in the trivial counterclaims some students received during the argumentation session. One student, for instance, stated he had to justify that he “faced” (Student FG Interview) or pointed pill bugs in varying directions every trial in order to counter the critique that the pill bugs he used simply moved in the direction they were initially facing; thereby making his measurements and interpretations problematic.

Our own eagerness to promote community-driven knowledge production resulted in forced or over-scaffolded experiences wherein the community was not expected to generate knowledge, but rather to function as a means for carrying out various procedures, which inevitably produced student disengagement. As a result, students were directed to simply submit work that met specific “grading” criteria, with the community’s involvement serving merely to influence the outcome of that work. The overriding force of students “playing school” in the lab (Jiménez-Aleixandre et al., 2000), and of GTAs harnessing the community to play this “game”, delegitimized the mechanisms of and purpose for the argumentation session; an activity we originally developed to foster the creation of a community grounded in the key components of CCE (e.g. providing a relevant context for collaboration and productive student-to-student dialog).

Discussion

In the context of an introductory non-majors lab experience, a knowledge-producing community that strives to meet the tenets of CCE must first start with students exercising epistemic agency (Forman et al. 2017). This initial step requires moderately expert instructional support because a culture of epistemic trust must be established and nourished, so students feel free to exercise their own agency (Grasswick 2010). Our results provide insights into this initial step as well as the overall emergence of a mock knowledge-producing community in the lab as facilitated by novice instructors. While some GTAs discursively encouraged student agency at key transitional points, others regularly discouraged it primarily so students would not have to defend weak claims in the forthcoming argumentation session. Despite the lab’s sequencing, scaffolding, and overall structure, the emergence of a mock community was problematic as students’ efforts were subtly delegitimized by GTAs at key transitional moments (thereby discouraging agentive shifts; Brickhouse 2008). Additionally, certain instructional supports were leveraged in unforeseen ways. Nevertheless, the results highlighted in this case study provide useful implications and future research opportunities for studies targeting non-major laboratories as a context for promoting CCE and scientific literacy (Allchin 2014; Kelly 2014). Table 3 summarizes the three instructional moves presented in the results, with the first two serving as potential starting points for creating professional development opportunities for GTAs tasked with lab-based teaching responsibilities focused on experimental design and argumentation.

Table 3 Summary of the instructional moves observed in the laboratory

Epistemic agency and epistemic dependence

Using the phenomenon of pill bug aggregation, students in Biol 100 were presented an inquiry-based opportunity that required more than cognitive considerations; rather, it required epistemic assumptions about the quality of evidence and legitimacy of interpretations (Gardner & Jones, 2019; Kitchener, 1983). When GTAs leveraged the complexity of the phenomenon while reassuring students they could be successful, students were effectively primed to become epistemic agents. This often occurred through various discursive moves aimed at enhancing students’ experimental designs and analysis strategies.

Some GTAs successfully encouraged deeper student engagement with inquiry through multiple discursive moves that positioned students as authors, through what Engle (2006) termed interactional framing. In Engle’s (2006) study of fifth graders, instructors successfully encouraged students to adopt a knowledge “generative” role; one in which they were continuously positioned as active, community-based agents involved in open-ended knowledge construction. Once teachers accomplished this agentive shift, it helped students transfer their understanding across contexts so new knowledge could be constructed. We witnessed this in the lab, for example when Erica’s students expressed pride in having “invented” a new, personally meaningful term: pre-aggregated.

In contrast, students also commonly exhibited epistemic dependence on GTAs, especially when easy-to-implement solutions were offered (e.g., Fricker, 2007). When GTAs maintained themselves as the only epistemic authority in the lab, students were prevented from learning the mechanisms for acquiring/developing new knowledge (Lidar et al., 2006). Once students were positioned as epistemically dependent, the community was negatively impacted because students no longer cared about defending a claim an authority figure had already approved for them; albeit this approval came about in a somewhat indirect way (e.g. GTAs effectively analyzing students’ data for them so a claim could be made).

The results also confirm how scaffolding can be a “double-edged sword” (Jiménez-Aleixandre et al. 2000) within these spaces, as the supports offered were transformed into trivial procedures (Berland et al. 2016); a result that again would foster epistemic dependency as students merely “did the lesson” without engaging in meaningful knowledge construction. We identified this mishap as over-scaffolding because a support we had previously determined might benefit students was inadvertently relied upon and prioritized, producing “forced” experiences that thwarted students’ actions and shifted their focus towards meeting the expectations of the instructor (Forman et al. 2017).

A few GTAs also used students’ final “lab reports” as motivation for meeting certain “checklist” criteria; a decision that may be especially impactful in a non-science-majors course. Knight and Smith (2010) found non-science and science majors have different attitudes toward science courses; non-majors (more so than majors) frequently sought “correct answers” in the face of complex, open-ended problems. When reifications of practices (i.e. “surface features”; Scardamalia and Bereiter 2006) were adopted as the object of teachers’ or students’ activity, GTAs became frustrated and opted to discourage epistemic agency among their students. The possibility also remains that some GTAs’ emphasis on the lab reports caused students to participate in pseudo-argumentation; ultimately aiming to please their instructor and merely earn a “good” grade (Berland and Hammer 2012).

Uncovering the community’s purpose in the lab

We instituted the final argumentation session so students could “question, justify and evaluate their own and other’s reasoning” (Duschl & Osborne, 2002, p. 44) in a mock scientific community. Instead, what emerged were two manifestations of community: a “genuine” community in which students shared ideas and created knowledge partially in line with the tenets of CCE, and a “generic” community, a loosely defined, non-inclusive attempt at “doing the lesson”. The genuine community was supported by GTAs who referenced students’ previous work in the lab along with relevant scientific literature in order to better frame students’ interactions while connecting seemingly disparate pieces of a larger community together. Interactions within the generic community contained no such resemblances or attempts to foster a mock community.

Overall, we uncovered more instances of a “generic” community wherein the interactions observed resembled a performance rather than deep engagement in knowledge production (Brickhouse, 2008). When GTAs withheld agency from students, it inhibited their capacity to envision how their efforts mattered and prevented them from locating themselves in a wider community of knowledge producers. The generic community also persisted because GTAs failed to explicitly discuss the community’s role in producing objective knowledge; perhaps revealing that they themselves lacked understanding concerning the value of critique and the links among inquiry, argumentation and knowledge production in a scientific community (Fuselier et al., 2019). The inability to foster student agency in the classroom community represents one of the study’s most significant results, as it highlights the potential usefulness of CCE as a framework for understanding how science literacy might be further operationalized in a non-majors lab.

Inclusivity was also conspicuously absent from the classroom community, a finding further confirmed via students’ final interview responses (i.e. scientists need to simply run lots of trials), none of which explicitly highlighted the significance of diverse voices to knowledge construction (Longino, 2002). Future research might explore whether and how a collective community emerges in the lab when GTAs are provided much needed (yet difficult to attain) support. The support provided might target more dialogic instructional moves (cf. Grinath & Southerland, 2019) that in turn foster development of a more genuine community; similar to reforms being advocated for in K-12 science classrooms within the U.S. (Gardner & Parrish, 2019; Miller et al., 2018).

Limitations

The results of this case study may not be generalizable, yet we contend they provide valuable insights into the challenges of an often overlooked, very common, and important instructional context. Additionally, we did not explicitly connect individual GTAs’ epistemic beliefs with their instructional moves in the current study – though research shows beliefs influence instructional practice (cf. Maclellan 2015). We initially hypothesized that individual conceptions of knowledge and knowledge production (Hofer and Pintrich 1997) might provide further insights regarding instructional choices. However, we quickly discovered the categorizations we attempted to apply (e.g. embracing knowledge as being absolute or existing in nature; Fuselier et al. 2019) did not inform our analysis; mainly because contradictions in instructional approach emerged regularly (e.g. the same GTA discouraging agency in one situation and encouraging it in a similar one). Future research might qualitatively explore why such contradictions emerge while also examining how and when they get resolved (McFadden 2019). Lastly, we did not analyze how the composition of student groups might have influenced participation in the lab. The possibility remains this variable influenced students’ willingness to exert agency, which again highlights an opportunity for future research.

Conclusions

Non-majors labs taught primarily by inexperienced instructors present special challenges to the student experience of participation in a mock scientific community, and the current study highlights CCE as a valuable framework within which to examine this process. If students are to grasp the “authentic generation of scientific knowledge” (Ford, 2008), lab instructors require targeted preparation for fostering and facilitating an atmosphere of epistemic trust and full participation in an inclusive community. If instructors themselves do not understand the utility of student epistemic agency and community construction, it is unlikely that students will experience science-as-practice as intended. Encouraging agency and promoting a community that adheres to at least some of the tenets of CCE requires expert pedagogical skill and is likely challenging even for seasoned teachers. Including in GTA professional development intentional instruction on social epistemologies of science and their manifestation in the lab, with a focus on agency and inclusivity, would be a step in the right direction. This instruction should foreground the importance of inclusivity in knowledge production to, hopefully, encourage instructors to position all students as knowledge producers. Further, instructional supports, including instructor discursive moves and the structure of the lab lesson, must be specifically designed to move students away from simply “doing the lesson” or concentrating on “the grade”. There is a fine balance between instilling a sense of importance that motivates students to fully participate in a community of knowledge producers and over-scaffolding to “force” this community to emerge. One way to encourage this balance would be to have students generate useful questions they might ask their peers and to explicitly reflect on their position in the community. Development of “competent outsiders” requires more than a tangential understanding of science, and this development is currently in the hands of our future faculty; one way to enhance student scientific literacy is to enhance GTA understanding of the social epistemologies of science.

Availability of data and materials

The datasets used and analyzed during the current study are available from the corresponding author on reasonable request.

Notes

  1. Pill bug is a common name for Armadillidium vulgare, also known as woodlice.

Abbreviations

GTAs:

Graduate Teaching Assistants

U.S.:

United States

CCE:

Critical Contextual Empiricism

References

  1. AAAS (2011). Vision and change in undergraduate biology education, N.S. Foundation, Editor. Washington, D.C: American Association for the Advancement of Science.

    Google Scholar 

  2. Abd-El-Khalick, F., Waters, M., & Le, A. (2008). Representations of nature of science in high school chemistry textbooks over the past four decades. Journal of Research in Science Teaching, 45(7), 835–855.

    Google Scholar 

  3. Allchin, D. (1999). Values in science: An educational perspective. Science & Education, 8(1), 1–12.

    Google Scholar 

  4. Allchin, D. (2014). From science studies to scientific literacy: A view from the classroom. Science & Education., 23, 1911–1932.

    Google Scholar 

  5. Armstrong, D., Gosling, A., Weinman, J., & Marteau, T. (1997). The place of inter-rater reliability in qualitative research: An empirical study. Sociology, 31(3), 597–606.

    Google Scholar 

  6. Aydeniz, M., & Bilican, K. (2014). What do scientists know about the nature of science? A case study of novice scientists’ views of NOS. International Journal of Science and Mathematics Education, 12(5), 1083–1115.

    Google Scholar 

  7. Barton, A. C., & Osborne, M. D. (1998). Marginalized discourses and science education. Journal of Research in Science Teaching., 34, 339–340.

    Google Scholar 

  8. Berland, L. K., Schwarz, C. V., Krist, C., Kenyon, L., Lo, A. S., & Reiser, B. J. (2016). Epistemologies in practice: Making scientific practices meaningful for students. Journal of Research in Science Teaching, 53, 1082–1112.

    Google Scholar 

  9. Berland, L. K., & Hammer, D. (2012). Framing for scientific argumentation. Journal of Research in Science Teaching, 49(1), 68–94.

    Google Scholar 

  10. Borgerson, K. (2011). Amending and defending Critical Contextual Empiricism. European Journal for Philosophy of Science, 1(3), 435–449.

    Google Scholar 

  11. Brickhouse, N. (1994). Bringing in the outsiders: Reshaping the sciences of the future. Journal of Curriculum Studies, 26(4), 401–416.

    Google Scholar 

  12. Brickhouse, N. (2008). Should the sociology of science be rated X? In Teaching scientific inquiry, (pp. 95–98). Brill sense.

  13. Broly, P., Devigne, L., Deneubourg, J., & Devigne, C. (2014). Effects of group size on aggregation against desiccation in woodlice (isopoda: Oniscidea). Physiological Entomology, 39, 165–171.

    Google Scholar 

  14. Charmaz, K. (2006). Constructing grounded theory: A practical guide through qualitative analysis. In D. Silverman (Ed.), Introducing qualitative methods. Thousand Oaks: Sage.

    Google Scholar 

  15. Denzin, N. K. (1978). Sociological methods. New York: McGraw-Hill.

    Google Scholar 

  16. Duschl, R. A., & Grandy, R. E. (2008). Reconsidering the character and role of inquiry in school science: Framing the debates. In R. A. Duschl, & R. E. Grandy (Eds.), Teaching scientific inquiry: Recommendations for research and implementation, (pp. 1–37). Rotterdam: Sense Publishers.

    Google Scholar 

  17. Duschl, R. A., & Osborne, J. (2002). Supporting and promoting argumentation discourse in science education. Studies in Science Education., 38(1), 39–72.

    Google Scholar 

  18. Edwards-Jones, A. (2014). Qualitative data analysis with NVIVO. Journal of Education for Teaching, 40(2), 193–195.

    Google Scholar 

  19. Engle, R. (2006). Framing interactions to Foster generative learning: A Situative explanation of transfer in a Community of Learners Classroom. The Journal of the Learning Sciences, 15(4), 451–498.

    Google Scholar 

  20. Erickson, F. (1992). Ethnographic microanalysis of interaction. In The handbook of qualitative research in education, (pp. 201–225).

    Google Scholar 

  21. Feinstein, N. W., Allen, S., & Jenkins, E. (2013). Outside the pipeline: Reimagining science education for nonscientists. Science, 340(6130), 314–317.

    Google Scholar 

  22. Ferzli, M., Morant, T., Honeycutt, B., Warren, S. E., Fenn, M., & Burns, B. (2012). Conceptualizing graduate teaching assistant development through stages of concern. In Working theories for teaching assistant development, (pp. 231–275).

    Google Scholar 

  23. Fisher, D., & Frey, N. (2013). Better learning through structured teaching: A framework for the gradual release of responsibility. Alexandria: ASCD.

    Google Scholar 

  24. Ford, M. (2008). Disciplinary authority and accountability in scientific practice and learning. Science Education, 92(3), 404–423.

    Google Scholar 

  25. Forman, E. A., Ramirez-DelToro, V., Brown, L., & Passmore, C. (2017). Discursive strategies that foster an epistemic community for argument in a biology classroom. Learning and Instruction, 48, 32–39.

    Google Scholar 

  26. Fricker, M. (2007). Epistemic injustice: Power and ethics of knowing. Oxford: OUP.

    Google Scholar 

  27. Fuselier, L., McFadden, J., & Ray King, K. (2019). Do biologists’ conceptions of science a as social epistemology align with critical contextual empiricism? Science & Education. https://doi.org/10.1007/s11191-019-00084-8.

  28. Gardner, G. E., & Jones, M. G. (2011). Pedagogical preparation of the science graduate teaching assistant: Challenges and implications. Science Educator, 20(2), 31–41.

    Google Scholar 

  29. Gardner, G. E., & Parrish, J. (2019). Biology graduate teaching assistants as novice educators: Are there similarities in teaching ability and practice beliefs between teaching assistants and K–12 teachers? Biochemistry and Molecular Biology Education, 47(1), 51–57.

    Google Scholar 

  30. Glaser, B., & Strauss, A. (1967). The discovery of grounded theory: Strategies for qualitative research, (p. 81). London: Wiedenfeld and Nicholson.

    Google Scholar 

  31. Grasswick, H. E. (2010). Scientific and lay communities: Earning epistemic trust through knowledge sharing. Synthese, 177(3), 387–409.

    Google Scholar 

  32. Grinath, A. S., & Southerland, S. A. (2019). Applying the ambitious science teaching framework in undergraduate biology: Responsive talk moves that support explanatory rigor. Science Education, 103(1), 92–122.

    Google Scholar 

  33. Hashweh, M. Z. (1996). Effects of science teachers’ epistemological beliefs in teaching. Journal of Research in Science Teaching, 33(1), 47–63.

    Google Scholar 

  34. Havdala, R., & Ashkenazi, G. (2007). Coordination of theory and evidence: Effect of epistemological theories on student’s laboratory practice. Journal of Research in Science Teaching., 44(8), 1134–1159.

    Google Scholar 

  35. Hofer, B. K., & Pintrich, P. R. (1997). The development of epistemological theories: Beliefs about knowledge and knowing and their relation to learning. Review of Educational Research, 67(1), 88–140.

    Google Scholar 

  36. Honeycutt, B., Ferzli, M., Morant, T., & Egan Warren, S. (2010). An interdisciplinary approach to graduate TA training: A reflection of best practice. Studies Grad Prof Stud Develop, 13, 138–152.

    Google Scholar 

  37. Irzik, G., & Nola, R. (2014). New directions for nature of science research. In International handbook of research in history, philosophy and science teaching, (pp. 999–1021). Dordrecht: Springer.

    Google Scholar 

  38. Jimenenez-Aleixandre, M. P., Rodriguez, A. B., & Duschl, R. A. (2000). “Doing the lesson” or “doing science”: Argument in high school genetics. Science Education, 84(3), 757–792.

    Google Scholar 

  39. Jordan, B., & Henderson, A. (1995). Interaction analysis: Foundations and practice. The Journal of the Learning Sciences, 4(1), 39–103.

    Google Scholar 

  40. Kelly, G. J. (2014). Inquiry teaching and learning: Philosophical considerations. In M. Matthews (Ed.), International handbook of research in history, philosophy and science teaching, (pp. 1363–1380). Dordrecht: Springer.

    Google Scholar 

  41. Kelly, G. J., McDonald, S., & Wickman, P. O. (2012). Science learning and epistemology. In K. Tobin, B. Fraser, & C. McRobbie (Eds.), Second international handbook of science education, (pp. 281–291). Dordrecht: Springer.

    Google Scholar 

  42. Kitchener, K. S. (1983). Cognition, metacognition, and epistemic cognition: A three-level model of cognitive processing. Human Development, 4, 222–232.

    Google Scholar 

  43. Knight, J. K., & Smith, M. K. (2010). Different but equal? How nonmajors and majors approach and learn genetics. CBE—Life Sciences Education, 9(1), 34–44.

    Google Scholar 

  44. Knorr Cetina, K. (1999). Epistemic cultures: How the sciences make knowledge. Cambridge: Harvard University Press.

    Google Scholar 

  45. Krist, C., Schwarz, C. V., & Reiser, B. J. (2019). Identifying essential epistemic heuristics for guiding mechanistic reasoning in science learning. Journal of the Learning Sciences, 28(2), 160–205.

    Google Scholar 

  46. Krueger, R. A., & Casey, M. A. (2014). Focus groups: A practical guide for applied research. Sage publications.

  47. Lemke, J. L. (2012). Analyzing verbal data: Principles, methods, and problems. In Second international handbook of science education, (pp. 1471–1484). Dordrecht: Springer Netherlands.

    Google Scholar 

  48. Lidar, M., Lundqvist, E., & Östman, L. (2006). Teaching and learning in the science classroom, the interplay between teachers’ epistemological moves and students’ practical epistemology. Science Education., 90(1), 148–163.

    Google Scholar 

  49. Longino, H. (1990). Science as social knowledge. New Jersey: Princeton University press.

    Google Scholar 

  50. Longino, H. (2002). The fate of knowledge. Princeton: Princeton University Press.

    Google Scholar 

  51. Luft, J. A., Kurdziel, J. P., Roehrig, G. H., & Turner, J. (2004). Growing a garden without water: Graduate teaching assistants in introductory science laboratories at a doctoral/research university. Journal of Research in Science Teaching, 41(3), 211–233.

    Google Scholar 

  52. Maclellan, E. (2015). Updating understandings of ‘teaching’: Taking account of learners’ and teachers’ beliefs. Teaching in Higher Education, 20(2), 171–182 Chicago.

    Google Scholar 

  53. Mansour, N. (2009). Science teachers’ beliefs and practices: Issues, implications and research agenda. International Journal of Environmental and Science Education, 4(1), 25–48 Chicago.

    Google Scholar 

  54. Manz, E. (2015). Representing student argumentation as functionally emergent from scientific activity. Review of Educational Research, 85(4), 553–590.

    Google Scholar 

  55. Matthews, M. R., & Matthews, M. R. (Eds.) (2014). International handbook of research in history, philosophy and science teaching. Dordrecht: Springer.

    Google Scholar 

  56. McFadden, J. (2019). Transitions in the perpetual beta of NGSS: One science teacher's beliefs and attempts for instructional change, 1-30. Journal of Science Teacher Education. https://doi.org/10.1080/1046560X.2018.1559559.

  57. Merriam, S. B. (2002). Qualitative research in practice: Examples for discussion and analysis. San Francisco: Jossey-Bass.

  58. Merriam, S. B., & Tisdell, E. J. (2015). Qualitative research: A guide to design and implementation. San Francisco: Wiley.

  59. Miles, M. B., & Huberman, A. M. (1994). Qualitative data analysis: An expanded sourcebook (2nd ed.). Thousand Oaks: Sage.

  60. Miller, E., Manz, E., Russ, R., Stroupe, D., & Berland, L. (2018). Addressing the epistemic elephant in the room: Epistemic agency and the next generation science standards. Journal of Research in Science Teaching, 55(7), 1053–1075.

  61. Muis, K., & Foy, M. (2010). The effects of teachers’ beliefs on elementary students’ beliefs, motivation, and achievement in mathematics. In L. Bendixen, & F. Feucht (Eds.), Personal epistemology in the classroom: Theory, research, and implications for practice (pp. 435–469).

  62. National Academies of Sciences, Engineering, and Medicine (2018). Indicators for monitoring undergraduate STEM education. Washington, DC: The National Academies Press.

  63. National Research Council (2012). Discipline-based education research: Understanding and improving learning in undergraduate science and engineering. Washington, DC: The National Academies Press. https://doi.org/10.17226/13362.

  64. Niaz, M., & Maza, A. (2011). Nature of science in general chemistry textbooks (pp. 1–37). Dordrecht: Springer.

  65. Park, C. (2004). The graduate teaching assistant (GTA): Lessons from North American experience. Teaching in Higher Education, 9(3), 349–361.

  66. Patton, M. Q. (2002). Qualitative research & evaluation methods. Thousand Oaks: Sage.

  67. Perry, W. G. (1970). Forms of intellectual and ethical development in the college years: A scheme. San Francisco: Jossey-Bass.

  68. Raviv, A., Bar-Tal, D., Raviv, A., Biran, B., & Sela, Z. (2003). Teachers’ epistemic authority: Perceptions of students and teachers. Social Psychology of Education, 6(1), 17–42.

  69. Richardson, J. T. (2013). Epistemological development in higher education. Educational Research Review, 9, 191–206.

  70. Roehrig, G. H., Luft, J. A., Kurdziel, J. P., & Turner, J. A. (2003). Graduate teaching assistants and inquiry-based instruction: Implications for graduate teaching assistant training. Journal of Chemical Education, 80(10), 1206–1210.

  71. Rushin, J. W., Saix, J. D., Lumsden, A., Streubel, D. P., Summers, G., & Bernson, C. (1997). Graduate teaching assistant training: A basis for improvement of college biology teaching and faculty development? American Biology Teacher, 59, 86–90.

  72. Sandoval, W. (2014). Science education’s need for a theory of epistemological development. Science Education, 98(3), 383–387.

  73. Sandoval, W. A. (2005). Understanding students’ practical epistemologies and their influence on learning through inquiry. Science Education, 89(4), 634–656.

  74. Scardamalia, M., & Bereiter, C. (1991). Higher levels of agency for children in knowledge building: A challenge for the design of new knowledge media. Journal of the Learning Sciences, 1, 37–68.

  75. Scardamalia, M., & Bereiter, C. (2006). Knowledge building. In R. K. Sawyer (Ed.), The Cambridge handbook of the learning sciences (pp. 97–115). Cambridge: Cambridge University Press.

  76. Schussler, E. E., Read, Q., Marbach-Ad, G., Miller, K., & Ferzli, M. (2015). Preparing biology graduate teaching assistants for their roles as instructors: An assessment of institutional approaches. CBE—Life Sciences Education, 14(3), ar31.

  77. Stroupe, D. (2014). Examining classroom science practice communities: How teachers and students negotiate epistemic agency and learn science-as-practice. Science Education, 98, 487–516.

  78. Sundberg, M. D., Armstrong, J. E., & Wischusen, E. W. (2005). A reappraisal of the status of introductory biology laboratory education in US colleges and universities. American Biology Teacher, 67, 525–529.

  79. Thompson, J., Hagenah, S., Kang, H., Stroupe, D., Braaten, M., Colley, C., & Windschitl, M. (2016). Rigor and responsiveness in classroom activity. Teachers College Record, 118(5), 1–58. https://www.tcrecord.org. Accessed 26 May 2020.

  80. Tuf, I. H., Drábková, L., & Šipoš, J. (2015). Personality affects defensive behaviour of Porcellio scaber (Isopoda, Oniscidea). ZooKeys, 515, 159–171. https://doi.org/10.3897/zookeys.515.9429.

  81. Wickman, P. O. (2004). The practical epistemologies of the classroom: A study of laboratory work. Science Education, 88(3), 325–344.

  82. Wolcott, H. F. (1994). Transforming qualitative data: Description, analysis, and interpretation. London: Sage.

  83. Wyse, S. A., Long, T. M., & Ebert-May, D. (2014). Teaching assistant professional development in biology: Designed for and driven by multidimensional data. CBE—Life Sciences Education, 13(2), 212–223.

  84. Yerdelen-Damar, S., & Eryılmaz, A. (2019). Promoting conceptual understanding with explicit epistemic intervention in metacognitive instruction: Interaction between the treatment and epistemic cognition. Research in Science Education, 1–29.

Acknowledgements

This research on GTA teaching professional development was conducted as part of the BioTAP Scholars program, a component of the Biology Teaching Assistant Project (BioTAP; NSF RCN-UBE grant #DBI-1247938).

Funding

Spencer Foundation (#201700080).

Author information

Contributions

JM and LF co-designed the study. JM and LF collected the data. JM carried out the analysis. JM drafted the manuscript. The authors read and approved the final manuscript.

Corresponding author

Correspondence to Justin Robert McFadden.

Ethics declarations

Competing interests

JM - None to report. LF - None to report.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

About this article

Cite this article

McFadden, J.R., Fuselier, L. Graduate teaching assistants: sharing epistemic agency with non-science majors in the biology laboratory. Discip Interdscip Sci Educ Res 2, 7 (2020). https://doi.org/10.1186/s43031-020-00024-5

Keywords

  • Graduate teaching assistants
  • Discipline-based education research
  • Critical contextual empiricism
  • Case study