In this activity, students will perform an experiment using dialysis tubing to create cellular models that demonstrate the linear relationship between cell weight and time in varying tonicities. Videos and data sets (of faculty results) are provided.
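The linear relationship students quantify comes down to a least-squares slope of mass against time. A minimal sketch, with illustrative readings rather than the provided faculty data sets:

```python
# Fit the rate of mass change (slope) from mass-vs-time readings.
# The minutes/grams values below are illustrative, not the faculty data.

def slope(xs, ys):
    """Ordinary least-squares slope of ys regressed on xs."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

minutes = [0, 10, 20, 30, 40]
mass_g = [25.0, 25.9, 27.1, 27.8, 29.0]
rate = slope(minutes, mass_g)   # grams gained per minute
```

In a hypertonic bath the fitted slope would come out negative, as the model cell loses mass instead of gaining it.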
Think you're good at guessing stats? Guess again. Whether we consider ourselves math people or not, our ability to understand and work with numbers is terribly limited, says data visualization expert Alan Smith. In this delightful talk, Smith explores the mismatch between what we know and what we think we know.
Wind Surge is a Java-based applet for exploring how the water level on the windward and leeward sides of a basin depends on wind speed, basin length, water depth, and boundary type.
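The dependence the applet explores can be sketched with the steady-state wind set-up balance, in which the water-surface slope balances the wind stress (deta/dx = tau/(rho*g*h)). The drag coefficient and the example numbers below are assumptions for illustration, and bottom stress is neglected:

```python
# Minimal sketch of steady-state wind set-up in a closed basin.
# CD and the example inputs are assumed values, not from the applet.
RHO_AIR, RHO_WATER, G = 1.2, 1000.0, 9.81   # kg/m^3, kg/m^3, m/s^2
CD = 1.3e-3                                  # assumed air-water drag coefficient

def wind_setup(U, L, h):
    """Water-level difference (m) between the leeward and windward ends of a
    closed basin of length L (m) and depth h (m) under wind speed U (m/s)."""
    tau = RHO_AIR * CD * U**2                # wind stress on the surface (Pa)
    return tau * L / (RHO_WATER * G * h)

dh = wind_setup(U=20.0, L=50_000.0, h=5.0)   # set-up across a 50 km basin
```

The 1/h factor is why shallow basins show much larger surges than deep ones for the same wind, the behavior the applet makes visible.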
(Note: this resource was added to OER Commons as part of a batch upload of over 2,200 records. If you notice an issue with the quality of the metadata, please let us know by using the 'report' button and we will flag it for consideration.)
Students will examine graphs on education, earnings, and workforce participation for men and women between 1940 and 2010. Specifically, students will study a graph of the percentages of managers who were women from 1940 to 2009 to understand the connection between the increase in these percentages and the feminist movement of the 1960s.
This exercise uses an analytical method (Grubb, 1993) and Excel to calculate the capture-zone shape for a TCE remediation well in Wooster, Ohio. The case study is described in an extensive PowerPoint presentation. Students program the capture-zone equations into an Excel worksheet and use them to delineate the contributing area of a contaminant recovery well. Students can then experiment by varying the pumping rate, hydraulic conductivity, and hydraulic gradient to better understand the sensitivity of capture-zone shape to these parameters.
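The standard uniform-flow capture-zone relations (the kind of analytical solution Grubb, 1993 presents) can be sketched as follows; the parameter values are illustrative, not the Wooster case-study numbers:

```python
import math

# Uniform-flow capture-zone geometry for a single pumping well.
# Example inputs are assumed for illustration.

def capture_zone(Q, K, b, i):
    """Return (stagnation-point distance, far-field half-width), in meters.
    Q: pumping rate (m^3/d), K: hydraulic conductivity (m/d),
    b: aquifer thickness (m), i: regional hydraulic gradient (-)."""
    q = K * b * i                       # uniform flow per unit width (m^2/d)
    x_stag = Q / (2 * math.pi * q)      # downgradient stagnation point
    y_max = Q / (2 * q)                 # capture-zone half-width far upgradient
    return x_stag, y_max

x_stag, y_max = capture_zone(Q=500.0, K=10.0, b=20.0, i=0.001)
```

Varying Q, K, or i here mirrors the sensitivity experiments in the exercise: the capture-zone width scales directly with Q and inversely with the product K*b*i.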
Short Description: The word workaround has entered general English usage to refer to a makeshift method of overcoming or bypassing a problem, an example being the events that took place after an explosion on the Apollo 13 spacecraft in 1970. Until recently, the concept was limited primarily to technical contexts. This book explores the origins of workarounds, the challenges of identifying and managing them, and the potential benefits and risks involved. It discusses the use of workarounds in different settings and also takes a look at future challenges.
Word Count: 47179
(Note: This resource's metadata has been created automatically by reformatting and/or combining the information that the author initially provided as part of a bulk import process.)
This exercise provides students the opportunity to work with real microprobe data to perform a series of common calculations. It also provides a brief glimpse into a high-pressure experiment. (I hope to expand this opportunity in the future via web activities...stay tuned.) The exercise can be used as a laboratory activity or a problem set. It is ideally suited to a spreadsheet program like Excel, but can be completed by hand. This is a great opportunity for students who are unfamiliar with spreadsheets to get their feet wet; for me, trial by fire is the best way to learn a new software program. The exercise could be used in any undergraduate petrology or mineralogy course and assumes only a general background in mineral chemistry. The goals are for students to: 1) work with real data from an experiment, 2) learn (or remind themselves of) the relationship between chemistry and crystal structure as displayed in a mineral formula, and 3) use a geothermometer to see how phase equilibria can be used to decipher the physical properties of rocks. The exercises include:
- Mineral formula recalculation
- Unit cell content calculation
- Calculating end-member percentages
- Plotting data on a ternary plot
- Geothermometer calculation
The exercise could easily be modified to include other "pet" analyses or questions.
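One of the listed calculations, mineral formula recalculation, can be sketched as below. The oxide values are illustrative (roughly pure forsterite), not the exercise's microprobe data, and the molar masses are approximate:

```python
# Recalculate a mineral formula (cations per N oxygens) from oxide wt%.
# Oxide analysis below is illustrative, not from the exercise's data set.

OXIDES = {  # oxide: (molar mass g/mol, cations per oxide, oxygens per oxide)
    "SiO2": (60.08, 1, 2),
    "MgO":  (40.30, 1, 1),
    "FeO":  (71.84, 1, 1),
}

def formula_units(wt_pct, n_oxygens):
    """Cations per formula unit, normalized to n_oxygens anions."""
    moles = {ox: w / OXIDES[ox][0] for ox, w in wt_pct.items()}
    total_o = sum(m * OXIDES[ox][2] for ox, m in moles.items())
    scale = n_oxygens / total_o
    return {ox: m * OXIDES[ox][1] * scale for ox, m in moles.items()}

# Olivine formulas are normalized to 4 oxygens; expect roughly Mg2SiO4
cations = formula_units({"SiO2": 42.7, "MgO": 57.3}, n_oxygens=4)
```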
Sea surface temperature (SST) is a critical variable that determines biogeographic and distribution patterns of marine organisms. Changes in temperature influence species reproduction and survival and can affect the spread of invasive species and marine diseases. As a result, SST is a vital indicator of changes in ecosystem health, and understanding the patterns and causes of change is necessary for conservation decisions. In a previous activity (Working with Scientific Data Sets in Matlab: An Exploration of Ocean Color and Sea Surface Temperature), you downloaded and sub-scened global, annually averaged SST data. In addition to understanding the year-to-year variability in SST patterns, it is important to understand SST variability over shorter time scales (e.g., daily, seasonal). In this activity we will work with daily imagery to understand intra-annual variability of SST and interpolate values where data have not been collected.
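The interpolation step can be sketched as filling gaps in a daily series by linear interpolation between the nearest observed days. The temperature values are illustrative, not the activity's satellite data, and gaps at the series ends are left unfilled:

```python
# Fill missing daily SST values (None) by linear interpolation between the
# nearest observed neighbors. Values are illustrative only.

def fill_gaps(series):
    """Linearly interpolate None entries between known neighbors."""
    known = [(i, v) for i, v in enumerate(series) if v is not None]
    out = list(series)
    for (i0, v0), (i1, v1) in zip(known, known[1:]):
        for i in range(i0 + 1, i1):
            t = (i - i0) / (i1 - i0)
            out[i] = v0 + t * (v1 - v0)
    return out

daily_sst = [14.2, None, 14.8, 15.1, None, None, 16.0]
filled = fill_gaps(daily_sst)
```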
Students will learn about how the U.S. government classifies race and ethnicity. The teacher will play a video of students at Park East High School in New York City who contacted the U.S. Census Bureau to start a conversation about the way race and ethnicity are identified in census surveys. Students will also read a blog post explaining how the Census Bureau has changed the way it collects data on race and ethnicity. In the last part of the activity, students will write a letter that could be sent to a leader in their community with the goal of sparking some type of change.
Spreadsheets Across the Curriculum module/Geology of National Parks course. Students use foundational math to study the velocity of the North American Plate over the hot spot, the volume of eruptive materials from it, and the recurrence interval of the cataclysmic eruptions.
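The module's foundational math is essentially a rate calculation: plate velocity from the spacing and age difference of calderas along the hotspot track. The numbers below are illustrative, not the module's data:

```python
# Plate velocity from hotspot-track geometry: distance / age difference.
# Inputs are illustrative placeholders, not the module's values.
distance_km = 160.0        # spacing between two calderas along the track (km)
age_diff_myr = 6.4         # difference in their eruption ages (million years)
velocity_cm_per_yr = (distance_km * 1e5) / (age_diff_myr * 1e6)
```

The unit conversion (km to cm, Myr to yr) is the step students typically work through in the spreadsheet.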
Today we’re going to talk about Bayes Theorem and Bayesian hypothesis testing. Bayesian methods like these are different from how we've been approaching statistics so far, because they allow us to update our beliefs as we gather new information - which is how we tend to think naturally about the world. And this can be a really powerful tool, since it allows us to incorporate both scientifically rigorous data AND our previous biases into our evolving opinions.
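The belief-updating the episode describes is Bayes' theorem. A minimal sketch with textbook-style numbers (a 1% prior, a 90%-sensitive test with a 5% false-positive rate); these values are illustrative, not from the video:

```python
# Bayes' theorem as an update rule: posterior from prior and likelihoods.
# The prior/likelihood numbers are illustrative textbook-style values.

def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Posterior P(H | E) from a prior and the two likelihoods of evidence E."""
    evidence = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / evidence

posterior = bayes_update(prior=0.01, p_e_given_h=0.90, p_e_given_not_h=0.05)
```

A positive result raises the belief from 1% to about 15%; feeding the posterior back in as the next prior is exactly the iterative updating described above.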
CORRECTION: The right-hand side of the equation should not have P()'s; it should just be the raw numbers.
Today we’re going to talk about how we compare things that aren’t exactly the same - or aren’t measured in the same way. For example, you might want to know whether a 1200 on the SAT is better than a 25 on the ACT. For this, we need to standardize our data using z-scores - which allow us to make comparisons between two sets of data as long as they’re normally distributed. We’ll also talk about converting these scores to percentiles and discuss how percentiles, though valuable, don’t actually tell us how “extreme” our data really is.
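The SAT-versus-ACT comparison works by putting both scores on a common footing. The means and standard deviations below are illustrative assumptions, not official SAT or ACT statistics:

```python
# Standardize scores from different scales with z-scores.
# Means and SDs are assumed values for illustration.

def z_score(x, mean, sd):
    """Standard score: how many standard deviations x lies from the mean."""
    return (x - mean) / sd

sat_z = z_score(1200, mean=1050, sd=200)   # 0.75 SDs above the mean
act_z = z_score(25, mean=21, sd=5.5)       # ~0.73 SDs above the mean
```

On these assumed distributions the 1200 SAT is very slightly "better," though both scores sit about three quarters of a standard deviation above average.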
In this lesson, students will learn to find and use z-scores to compare data. Through videos and interactive questions with immediate feedback, they can practice the basics of using z-scores.
Short Description: For immediate access, this book may be downloaded in PDF format. The original LaTeX files may be downloaded from the GitHub repository.
Word Count: 39132
This activity entails a basic morphometrics lab, followed by an in-class exercise to reinforce some of the same key concepts. The lab exercise familiarizes the student with basic methods of quantitative characterization and statistical comparison through measurement of pygidia (tails) of two species of the Ordovician trilobite Bellefontia -- one from New York and one from Pennsylvania. Actual specimens, while nice, are not required; data acquired by measurement from photo collages will suffice. The exercise culminates in a statistical test of significance (using the Z-statistic) of the difference in slopes of the lines fitted to the data from the two species. The data also serve to pose questions and prompt consideration of growth trajectories and the discrimination of isometric from anisometric growth.

The in-class activity builds on the knowledge base built in the lab but applies it to species discrimination based on the cranidia (central part of the head) of three species of the Upper Cambrian genus Bartonaspis, known to be of identical age from their occurrences within the very thin (everywhere 2 m or less) Irvingella major Subzone of the Elvinia trilobite Zone. The importance of that subzone, which is the "critical interval" at the top of the Pterocephaliid Biomere and the basal unit of the Sunwaptan Stage, traceable throughout Laurentian North America, also contributes to the significance of the exercise. With the insight developed from the lab, students are able to confidently distinguish the three species of Bartonaspis (from three photo collages), but must thoughtfully evaluate the data presented in bivariate plots of cranidial morphologic data to do so. The exercise gives the students a good sense of the level of familiarity and morphologic characterization necessary to do species-level identification, and also some worthwhile practice in basic quantitative methods.
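The significance test the lab builds to can be sketched as a z-statistic for the difference between two regression slopes. The slope and standard-error values below are hypothetical, not the Bellefontia measurements:

```python
import math

# z-test for a difference between two fitted regression slopes.
# Slopes and standard errors are hypothetical illustration values.

def slope_z(b1, se1, b2, se2):
    """z-statistic for H0: the two regression slopes are equal."""
    return (b1 - b2) / math.sqrt(se1**2 + se2**2)

z = slope_z(b1=1.12, se1=0.04, b2=0.98, se2=0.05)
significant = abs(z) > 1.96        # two-tailed test at the 0.05 level
```

A slope of exactly 1 on log-transformed measurements would indicate isometric growth, which is why the fitted slopes feed directly into the growth-trajectory questions above.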
This resource covers the binomial, Poisson, and discrete uniform distributions, with their formulas, properties, and solved examples.
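The pmf formulas the resource covers can be sketched directly; the parameter values below are illustrative:

```python
import math

# Probability mass functions for two of the covered distributions.
# Parameter values in the example calls are illustrative.

def binomial_pmf(k, n, p):
    """P(X = k) for X ~ Binomial(n, p): C(n,k) p^k (1-p)^(n-k)."""
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

def poisson_pmf(k, lam):
    """P(X = k) for X ~ Poisson(lam): lam^k e^(-lam) / k!."""
    return lam**k * math.exp(-lam) / math.factorial(k)

p_bin = binomial_pmf(3, 10, 0.5)    # 120 / 1024
p_poi = poisson_pmf(2, 4.0)
```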
The widespread use of ‘statistical significance’ as a license for making a claim of a scientific finding leads to considerable distortion of the scientific process (according to the American Statistical Association). We review why degrading p-values into ‘significant’ and ‘nonsignificant’ contributes to making studies irreproducible, or to making them seem irreproducible. A major problem is that we tend to take small p-values at face value, but mistrust results with larger p-values. In either case, p-values tell little about the reliability of research, because they are hardly replicable even if an alternative hypothesis is true. Also, significance (p ≤ 0.05) is hardly replicable: at a good statistical power of 80%, two studies will be ‘conflicting’, meaning that one is significant and the other is not, in one third of the cases if there is a true effect. A replication can therefore not be interpreted as having failed only because it is nonsignificant. Many apparent replication failures may thus reflect faulty judgment based on significance thresholds rather than a crisis of unreplicable research. Reliable conclusions on replicability and practical importance of a finding can only be drawn using cumulative evidence from multiple independent studies. However, applying significance thresholds makes cumulative knowledge unreliable. One reason is that with anything but ideal statistical power, significant effect sizes will be biased upwards. Interpreting inflated significant results while ignoring nonsignificant results will thus lead to wrong conclusions. But current incentives to hunt for significance lead to selective reporting and to publication bias against nonsignificant findings. Data dredging, p-hacking, and publication bias should be addressed by removing fixed significance thresholds. Consistent with the recommendations of the late Ronald Fisher, p-values should be interpreted as graded measures of the strength of evidence against the null hypothesis.
Also, larger p-values offer some evidence against the null hypothesis, and they cannot be interpreted as supporting the null hypothesis, falsely concluding that ‘there is no effect’. Information on possible true effect sizes that are compatible with the data must be obtained from the point estimate, e.g., from a sample average, and from the interval estimate, such as a confidence interval. We review how confusion about the interpretation of larger p-values can be traced back to historical disputes among the founders of modern statistics. We further discuss potential arguments against removing significance thresholds, for example that decision rules should rather be more stringent, that sample sizes could decrease, or that p-values should better be completely abandoned. We conclude that whatever method of statistical inference we use, dichotomous threshold thinking must give way to non-automated informed judgment.
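The "one third of the cases" figure in the abstract is simple probability arithmetic: with two independent studies of a true effect, each run at 80% power, exactly one of the two reaches significance with probability 2 x 0.8 x 0.2:

```python
# Probability that two independent 80%-power studies of a true effect
# "conflict" (one significant, the other not).
power = 0.80
p_conflict = 2 * power * (1 - power)   # = 0.32, roughly one third
```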
Tutorials on logistic regression and mediation analysis for an advanced undergraduate/graduate psychological statistics course. Tutorials are written in learnr. Each includes videos, quizzes, demonstrations of R code, and exercises that allow users to run R code.
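The tutorials themselves demonstrate R code; purely as a language-neutral illustration of the core quantity in logistic regression (converting a linear predictor into a probability), here is a minimal Python sketch with hypothetical coefficients:

```python
import math

# Inverse-logit step of logistic regression: log-odds -> probability.
# The intercept and slope are hypothetical, not from the tutorials.

def predict_prob(intercept, slope, x):
    """Probability implied by the linear predictor intercept + slope*x."""
    log_odds = intercept + slope * x
    return 1 / (1 + math.exp(-log_odds))

p = predict_prob(intercept=-1.5, slope=0.8, x=2.0)   # log-odds of 0.1
```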
Poor research design and data analysis encourage false-positive findings. Such poor methods persist despite perennial calls for improvement, suggesting that they result from something more than just misunderstanding. The persistence of poor methods results partly from incentives that favour them, leading to the natural selection of bad science. This dynamic requires no conscious strategizing—no deliberate cheating nor loafing—by scientists, only that publication is a principal factor for career advancement. Some normative methods of analysis have almost certainly been selected to further publication instead of discovery. In order to improve the culture of science, a shift must be made away from correcting misunderstandings and towards rewarding understanding. We support this argument with empirical evidence and computational modelling. We first present a 60-year meta-analysis of statistical power in the behavioural sciences and show that power has not improved despite repeated demonstrations of the necessity of increasing power. To demonstrate the logical consequences of structural incentives, we then present a dynamic model of scientific communities in which competing laboratories investigate novel or previously published hypotheses using culturally transmitted research methods. As in the real world, successful labs produce more ‘progeny,’ such that their methods are more often copied and their students are more likely to start labs of their own. Selection for high output leads to poorer methods and increasingly high false discovery rates. We additionally show that replication slows but does not stop the process of methodological deterioration. Improving the quality of research requires change at the institutional level.
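The link the paper draws between low power and high false discovery rates follows from standard arithmetic; the base rate of true hypotheses and the alpha level below are illustrative assumptions, not the paper's model parameters:

```python
# Expected false discovery rate among significant results, given the base
# rate of true hypotheses, statistical power, and alpha.
# prior_true and alpha below are illustrative assumptions.

def fdr(prior_true, power, alpha):
    """Share of significant results that are false positives."""
    false_pos = alpha * (1 - prior_true)
    true_pos = power * prior_true
    return false_pos / (false_pos + true_pos)

well_powered = fdr(prior_true=0.1, power=0.80, alpha=0.05)   # 0.36
under_powered = fdr(prior_true=0.1, power=0.20, alpha=0.05)  # ~0.69
```

Holding everything else fixed, cutting power from 80% to 20% roughly doubles the false discovery rate, which is the methodological deterioration the model selects for.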
(Note: This is a translation of an open educational resource created by the New York State Education Department (NYSED) as part of the "EngageNY" project in 2013. Although the resource itself was translated by people, its catalog description was machine-translated from the original English with Google Translate and may contain grammatical or linguistic errors. The original English description is provided below.)
English Description: In this module, students reconnect with and deepen their understanding of statistics and probability concepts first introduced in Grades 6, 7, and 8. Students develop a set of tools for understanding and interpreting variability in data, and begin to make more informed decisions from data. They work with data distributions of various shapes, centers, and spreads. Students build on their experience with bivariate quantitative data from Grade 8. This module sets the stage for more extensive work with sampling and inference in later grades.
Find the rest of the EngageNY Mathematics resources at https://archive.org/details/engageny-mathematics.