Abstract:
Current views of the nature of knowledge and of learning suggest that instructional approaches in science education should attend more closely to how students learn than to how content is taught. This study examined two approaches to teaching science based on contrasting perspectives on learning, social constructivist and traditional, and their effects on students' attitudes and achievement. Four categories of attitudes were measured using the Upper Secondary Attitude Questionnaire: attitude towards school, towards the importance of science, towards science as a career, and towards science as a subject in school. Achievement was measured by average class grades and by a researcher/teacher-constructed 30-item test comprising three sub-scales of items based on knowledge and on applications involving near-transfer and far-transfer of concepts. The sample consisted of 202 students in nine intact chemistry classrooms at a large high school in Miami, Florida, and involved two teachers. Results were analyzed using a two-way analysis of covariance (ANCOVA), with a pretest in attitude as the covariate for attitudes and prior achievement as the covariate for achievement. Adjusted mean scores were compared between the two groups and between females and males. With constructivist-based teaching, students showed a more favorable attitude towards science as a subject and obtained significantly higher scores in class achievement, total achievement, and achievement on the knowledge sub-scale of the knowledge and application test. Students in the traditional group showed a more favorable attitude towards school. Females showed a significantly more positive attitude towards the importance of science and obtained significantly higher scores in class achievement. No significant interaction effects were obtained for method of instruction by gender.
This study lends some support to the view that constructivist-based approaches to teaching science are a viable alternative to traditional modes of teaching. It is suggested that in science education, more consideration be given to those aspects of classroom teaching that foster closer coordination between social influences and individual learning.
Abstract:
This study explored the relative value of behavioral and cognitive psychology as the basis of instruction for underprepared college students enrolled in developmental reading courses. Specifically, this study examined the effects of a metacognitive strategy-based instructional approach (MSIA) modeling a metacognitive self-questioning technique (MSQT) versus a traditional skills-based instructional approach (SIA) on the Nelson-Denny reading comprehension scores of college developmental readers, and whether there were significant differences in achievement based on the instructional method used and on the sex of students. The sample consisted of 100 college developmental reading students enrolled in six intact sections of a reading course (REA0002). Participants completed a pretest of the comprehension subtest of the Nelson-Denny Reading Test (Form G). Three of these classes (n = 49) were taught using metacognitive-strategy instruction and three classes (n = 51) were taught using skills-based instruction. Participants then received a semester of instruction intended to improve their reading comprehension. At the end of the semester, participants completed a post-test with the Nelson-Denny Reading Comprehension Test (Form H). A two (between) x one (within) repeated measures analysis of variance (ANOVA) was used to test each of the study's hypotheses. Results showed no significant differences in reading comprehension between the groups receiving the different instructional treatments and no differences in reading comprehension between the men and women participants. Based on the findings, implications and recommendations for future research were discussed.
Abstract:
Students with emotional and/or behavioral disorders (EBD) present considerable academic challenges along with emotional and/or behavioral problems. In terms of reading, these students typically perform one to two years below grade level (Kauffman, 2001). Given the strong correlation of reading failure with school failure and overall success (Scott & Shearer-Lingo, 2002), finding effective approaches to reading instruction is imperative for these students (Staubitz, Cartledge, Yurick, & Lo, 2005). This study used an alternating treatments design to compare the effects of three conditions on the reading fluency, errors, and comprehension of four sixth-grade students with EBD who were struggling readers. Specifically, the following were compared: (a) repeated readings, in which participants read a passage of about 100-150 words three times; (b) non-repeated readings, in which participants sequentially read an original passage of about 100-150 words once; and (c) equivalent non-repeated readings, in which participants sequentially read a passage of about 300-450 words, equivalent to the number of words read in the repeated readings condition. Also examined were the effects of the three repeated readings practice trials per session on reading fluency and errors. The reading passage difficulty and length established prior to commencing were used for all participants throughout the standard phase. During the enhanced phase, the reading levels were increased by six months for all participants, and for two (the advanced readers), the length of the reading passages was increased by 50%, allowing for comparisons under more rigorous conditions. The results indicate that, overall, repeated readings had the best outcome across the standard and enhanced phases for increasing readers' fluency, reducing their errors per minute, and supporting correct answers to literal comprehension questions, as compared to the non-repeated and equivalent non-repeated conditions.
When comparing non-repeated and equivalent non-repeated readings, the results were mixed. Under the enhanced phases, the positive effects of repeated readings were more pronounced. Additional research is needed to compare the effects of repeated and equivalent non-repeated readings across other populations of students with disabilities or varying learning styles. This research should include collecting repeated readings practice trial data for fluency and errors to further analyze the immediate effects of repeatedly reading a passage.
Abstract:
The accurate and reliable estimation of travel time based on point detector data is needed to support Intelligent Transportation System (ITS) applications. It has been found that the quality of travel time estimation is a function of the method used in the estimation and varies for different traffic conditions. In this study, two hybrid on-line travel time estimation models, and their corresponding off-line methods, were developed to achieve better estimation performance under various traffic conditions, including recurrent congestion and incidents. The first model combines the Mid-Point method, which is a speed-based method, with a traffic flow-based method. The second model integrates two speed-based methods: the Mid-Point method and the Minimum Speed method. In both models, the switch between travel time estimation methods is based on the congestion level and queue status automatically identified by clustering analysis. During incident conditions with rapidly changing queue lengths, shock wave analysis-based refinements are applied for on-line estimation to capture the fast queue propagation and recovery. Travel time estimates obtained from existing speed-based methods, traffic flow-based methods, and the models developed were tested using both simulation and real-world data. The results indicate that all tested methods performed at an acceptable level during periods of low congestion. However, their performances vary with an increase in congestion. Comparisons with other estimation methods also show that the developed hybrid models perform well in all cases. Further comparisons between the on-line and off-line travel time estimation methods reveal that off-line methods perform significantly better only during fast-changing congested conditions, such as during incidents. 
The impacts of major influential factors on the performance of travel time estimation, including data preprocessing procedures, detector errors, detector spacing, frequency of travel time updates to traveler information devices, travel time link length, and posted travel time range, were investigated in this study. The results show that these factors have more significant impacts on the estimation accuracy and reliability under congested conditions than during uncongested conditions. For the incident conditions, the estimation quality improves with the use of a short rolling period for data smoothing, more accurate detector data, and frequent travel time updates.
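The speed-based Mid-Point method named above can be illustrated with a short sketch. One common reading of the method, assumed here rather than taken from the study, is that each point detector's spot speed is applied over the half-segments adjacent to it (from the midpoint to the previous detector to the midpoint to the next); the detector layout and speeds below are hypothetical.

```python
# Hedged sketch of a speed-based "Mid-Point" link travel time estimate:
# each detector's spot speed is assumed to hold from the midpoint to its
# upstream neighbor to the midpoint to its downstream neighbor.
def midpoint_travel_time(positions_mi, speeds_mph):
    """Estimate link travel time (hours) from detector positions
    (miles, strictly increasing) and concurrent spot speeds (mph)."""
    if len(positions_mi) != len(speeds_mph) or len(positions_mi) < 2:
        raise ValueError("need matching positions/speeds, >= 2 detectors")
    total = 0.0
    for i, (x, v) in enumerate(zip(positions_mi, speeds_mph)):
        left = positions_mi[0] if i == 0 else (positions_mi[i - 1] + x) / 2
        right = (positions_mi[-1] if i == len(positions_mi) - 1
                 else (x + positions_mi[i + 1]) / 2)
        total += (right - left) / v  # segment length covered at speed v
    return total

# Three detectors over a 2-mile link at a uniform 60 mph -> 2 minutes.
t = midpoint_travel_time([0.0, 1.0, 2.0], [60.0, 60.0, 60.0])
print(round(t * 60, 2))  # minutes -> 2.0
```

A hybrid model of the kind described would switch away from this estimator (e.g., to a flow-based or minimum-speed estimate) when the clustering analysis flags congested or queued conditions.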
Abstract:
Chromium (Cr) is a metal of particular environmental concern, owing to its toxicity and widespread occurrence in groundwater, soil, and soil solution. A combination of hydrological, geochemical, and microbiological processes governs the subsurface migration of Cr. Little effort has been devoted to examining how these biogeochemical reactions combine with hydrologic processes to influence Cr migration. This study focused on the complex problem of predicting Cr transport in laboratory column experiments. A 1-D reactive transport model was developed and evaluated against data obtained from laboratory column experiments. A series of dynamic laboratory column experiments was conducted under abiotic and biotic conditions. Cr(III) was injected into columns packed with β-MnO2-coated sand at different initial concentrations, variable flow rates, and two different pore water pH values (3.0 and 4.0). In biotic anaerobic column experiments, Cr(VI) along with lactate was injected into columns packed with quartz sand or β-MnO2-coated sand and the bacterium Shewanella alga Simidu (BrY-MT). A mathematical model was developed that included advection-dispersion equations for the movement of Cr(III), Cr(VI), dissolved oxygen, lactate, and biomass. The model included first-order rate laws governing the adsorption of each Cr species and lactate. The equations for transport and adsorption were coupled with nonlinear equations for rate-limited oxidation-reduction reactions along with dual-Monod kinetic equations. Kinetic batch experiments were conducted to determine the reduction of Cr(VI) by BrY-MT in three different substrates. Results of the column experiments with Cr(III)-containing influent solutions demonstrate that β-MnO2 effectively catalyzes the oxidation of Cr(III) to Cr(VI). For a given influent concentration and pore water velocity, oxidation rates are higher, and hence effluent concentrations of Cr(VI) are greater, at pH 4 relative to pH 3.
Reduction of Cr(VI) by BrY-MT was rapid (within one hour) in columns packed with quartz sand, whereas Cr(VI) reduction by BrY-MT was delayed (57 hours) in the presence of β-MnO2-coated sand. BrY-MT grown in BHIB (brain heart infusion broth) reduced the greatest amount of Cr(VI) to Cr(III), followed by TSB (tryptic soy broth) and M9 (minimal medium). Comparisons of data and model results from the column experiments show that the depths associated with Cr(III) oxidation and transport within sediments of shallow aquatic systems can strongly influence trends in surface water quality. The results of this study suggest that carefully performed laboratory column experiments are a useful tool for determining the biotransformation of redox-sensitive metals, even in the presence of a strong oxidant such as β-MnO2.
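The coupled equations described above can be written in a generic form; all symbols here are assumptions for illustration (standard reactive-transport notation), not the dissertation's actual parameterization. Each dissolved species obeys 1-D advection-dispersion with rate-limited linear sorption, and the Cr(VI) sink is governed by dual-Monod kinetics in the electron donor (lactate) and acceptor.

```latex
% 1-D advection-dispersion for dissolved species i (concentration C_i),
% with rate-limited linear sorption (sorbed mass S_i, bulk density rho_b,
% porosity theta) and a biogeochemical source/sink term r_i:
\frac{\partial C_i}{\partial t}
  = D \frac{\partial^2 C_i}{\partial x^2}
  - v \frac{\partial C_i}{\partial x}
  - \frac{\rho_b}{\theta} \frac{\partial S_i}{\partial t}
  + r_i,
\qquad
\frac{\partial S_i}{\partial t} = k_i \bigl( K_{d,i}\, C_i - S_i \bigr)

% Dual-Monod rate law for microbial Cr(VI) reduction coupled to lactate
% utilization by biomass X:
r_{\mathrm{Cr(VI)}}
  = -\,\mu_{\max}\, X \,
    \frac{C_{\mathrm{lac}}}{K_{\mathrm{lac}} + C_{\mathrm{lac}}} \,
    \frac{C_{\mathrm{Cr(VI)}}}{K_{\mathrm{Cr(VI)}} + C_{\mathrm{Cr(VI)}}}
```

The first-order sorption law above matches the abstract's "first-order rate laws governing the adsorption of each Cr species and lactate"; the Monod factors capture the rate limitation by both substrate and acceptor that the dual-Monod formulation implies.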
Abstract:
Physical therapy students must apply the relevant information learned in their academic and clinical experience to problem solve in treating patients. I compared the clinical cognitive competence in patient care of second-year master's students enrolled in two different curricular programs: modified problem-based (M P-B; n = 27) and subject-centered (S-C; n = 41). Main features of S-C learning include lecture and demonstration as the major teaching strategies, with no exposure to patients or problem-solving learning until the sciences (knowledge) have been taught. Comparatively, main features of M P-B learning include case study in small student groups as the main teaching strategy, early and frequent exposure to patients, and knowledge and problem-solving skills learned together for each specific case. Basic and clinical orthopedic knowledge was measured with a written test with open-ended items. Problem-solving skills were measured with a written case study patient problem test yielding three subscores: assessment, problem identification, and treatment planning. Results indicated that among the demographic and educational characteristics analyzed, there was a significant difference between groups on ethnicity, bachelor's degree type, admission GPA, and current GPA, but no significant difference on gender, age, possession of a physical therapy assistant license, or GRE score. In addition, the M P-B group achieved a significantly higher adjusted mean score on the orthopedic knowledge test after controlling for GRE scores. The S-C group achieved a significantly higher adjusted mean total score and treatment management subscore on the case study test after controlling for orthopedic knowledge test scores. These findings did not support their respective research hypotheses. There was no significant difference between groups on the assessment and problem identification subscores of the case study test.
The integrated M P-B approach promoted superior retention of basic and clinical science knowledge. The results on problem-solving skills were mixed. The S-C approach facilitated superior treatment planning skills, but equivalent patient assessment and problem identification skills, by emphasizing all skills equally and by exposing the students to more patients, with a wider variety of orthopedic physical therapy needs, than the M P-B approach.
Abstract:
Conjugated polymers (CPs) are intrinsically fluorescent materials that have been used for various biological applications including imaging, sensing, and delivery of biologically active substances. The synthetic control over flexibility and biodegradability of these materials aids the understanding of the structure-function relationships among the photophysical properties, the self-assembly behaviors of the corresponding conjugated polymer nanoparticles (CPNs), and the cellular behaviors of CPNs, such as toxicity, cellular uptake mechanisms, and sub-cellular localization patterns. Synthetic approaches towards two classes of flexible CPs with well-preserved fluorescent properties are described. The synthesis of flexible poly(p-phenylenebutadiynylene)s (PPBs) uses competing Sonogashira and Glaser coupling reactions and the differences in monomer reactivity to incorporate a small amount (~10%) of flexible, non-conjugated linkers into the backbone. The reaction conditions provide limited control over the proportion of flexible monomer incorporation. Improved synthetic control was achieved in a series of flexible poly(p-phenyleneethynylene)s (PPEs) using modified Sonogashira conditions. In addition to controlling the degree of flexibility, the linker provides disruption of backbone conjugation that offers control of the length of conjugated segments within the polymer chain. Therefore, such control also results in the modulation of the photophysical properties of the materials. CPNs fabricated from flexible PPBs are non-toxic to cells, and exhibit subcellular localization patterns clearly different from those observed with non-flexible PPE CPNs. The subcellular localization patterns of the flexible PPEs have not yet been determined, due to the toxicity of the materials, most likely related to the side-chain structure used in this series. The study of the effect of CP flexibility on self-assembly reorganization upon polyanion complexation is presented. 
Owing to its high rigidity and hydrophobicity, the PPB backbone undergoes reorganization more readily than PPE. The effects are enhanced in the presence of the flexible linker, which enables more efficient π-π stacking of the aromatic backbone segments. Flexibility has minimal effects on the self-assembly of PPEs. Understanding the role of flexibility on the biophysical behaviors of CPNs is key to the successful development of novel efficient fluorescent therapeutic delivery vehicles.
Abstract:
The successful performance of a hydrological model is usually challenged by the quality of the sensitivity analysis, calibration, and uncertainty analysis carried out in the modeling exercise and the subsequent simulation results. This is especially important under changing climatic conditions, where additional uncertainties associated with climate models and downscaling processes increase the complexity of the hydrological modeling system. In response to these challenges, and to improve the performance of hydrological models under changing climatic conditions, this research proposed five new methods for supporting hydrological modeling. First, a design of experiment aided sensitivity analysis and parameterization (DOE-SAP) method was proposed to identify the significant parameters and provide more reliable sensitivity analysis for improving parameterization during hydrological modeling. In the case study, improved calibration results were achieved, along with a more detailed sensitivity analysis of the significant parameters and their interactions. Second, a comprehensive uncertainty evaluation scheme was developed to evaluate three uncertainty analysis methods: the sequential uncertainty fitting version 2 (SUFI-2), generalized likelihood uncertainty estimation (GLUE), and parameter solution (ParaSol) methods. The results showed that SUFI-2 performed better than the other two methods based on calibration and uncertainty analysis results. The proposed evaluation scheme demonstrated that it is capable of selecting the most suitable uncertainty method for a given case study. Third, a novel sequential multi-criteria based calibration and uncertainty analysis (SMC-CUA) method was proposed to improve the efficiency of calibration and uncertainty analysis and to control the phenomenon of equifinality.
The results showed that the SMC-CUA method provided better uncertainty analysis results with higher computational efficiency than the SUFI-2 and GLUE methods, and controlled parameter uncertainty and the equifinality effect without sacrificing simulation performance. Fourth, an innovative response based statistical evaluation method (RESEM) was proposed for estimating uncertainty propagation effects and providing long-term predictions of hydrological responses under changing climatic conditions. Using RESEM, the uncertainty propagated from statistical downscaling to hydrological modeling can be evaluated. Fifth, an integrated simulation-based evaluation system for uncertainty propagation analysis (ISES-UPA) was proposed for investigating the effects and contributions of different uncertainty components to the total propagated uncertainty from statistical downscaling. Using ISES-UPA, the uncertainty from statistical downscaling, the uncertainty from hydrological modeling, and the total uncertainty from the two sources can be compared and quantified. The feasibility of all the methods was tested using hypothetical and real-world case studies. The proposed methods can also be integrated as a hydrological modeling system to better support hydrological studies under changing climatic conditions. The results from the proposed integrated hydrological modeling system can serve as scientific references for decision makers seeking to reduce the potential risk of damage caused by extreme events in long-term water resource management and planning.
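Of the uncertainty analysis methods compared above, GLUE is the simplest to illustrate. The sketch below shows its core loop (Monte Carlo sampling of a parameter prior, a likelihood threshold defining "behavioral" parameter sets, and likelihood-weighted prediction bounds) on a toy one-parameter model; the model, threshold, and data are invented stand-ins, not the dissertation's setup.

```python
# Minimal GLUE (generalized likelihood uncertainty estimation) sketch on a
# toy one-parameter "hydrological" model. All numbers are illustrative.
import numpy as np

rng = np.random.default_rng(42)

def toy_model(k, rain):
    # Stand-in rainfall-runoff model: runoff = k * rain.
    return k * rain

rain = rng.uniform(0, 10, 50)
observed = toy_model(0.6, rain) + rng.normal(0, 0.3, 50)  # synthetic "obs"

def nse(sim, obs):
    # Nash-Sutcliffe efficiency, a common GLUE likelihood measure.
    return 1 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

# 1. Monte Carlo sampling of the parameter prior
ks = rng.uniform(0.0, 1.0, 2000)
# 2. Likelihood score for each candidate parameter set
scores = np.array([nse(toy_model(k, rain), observed) for k in ks])
# 3. Retain "behavioral" sets above a (subjective) threshold
behavioral = ks[scores > 0.5]
weights = scores[scores > 0.5]
weights = weights / weights.sum()
# 4. Likelihood-weighted 5-95% prediction bounds for a new input
preds = toy_model(behavioral, 5.0)
order = np.argsort(preds)
cdf = np.cumsum(weights[order])
lo, hi = preds[order][np.searchsorted(cdf, [0.05, 0.95])]
print(f"true 3.0, GLUE 90% band: [{lo:.2f}, {hi:.2f}]")
```

The subjectivity of the behavioral threshold is exactly the kind of issue the comprehensive evaluation scheme above is designed to weigh when choosing between SUFI-2, GLUE, and ParaSol.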
Abstract:
Omnibus tests of significance in contingency tables use statistics of the chi-square type. When the null is rejected, residual analyses are conducted to identify cells in which observed frequencies differ significantly from expected frequencies. Residual analyses are thus conditioned on a significant omnibus test. Conditional approaches have been shown to substantially alter type I error rates in cases involving t tests conditional on the results of a test of equality of variances, or tests of regression coefficients conditional on the results of tests of heteroscedasticity. We show that residual analyses conditional on a significant omnibus test are also affected by this problem, yielding type I error rates that can be up to 6 times larger than nominal rates, depending on the size of the table and the form of the marginal distributions. We explored several unconditional approaches in search of a method that maintains the nominal type I error rate and found that a bootstrap correction for multiple testing achieves this goal. The validity of this approach is documented for two-way contingency tables in the contexts of tests of independence, tests of homogeneity, and fitting psychometric functions. Computer code in MATLAB and R to conduct these analyses is provided as Supplementary Material.
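A minimal version of an unconditional bootstrap correction of this kind can be sketched as follows, assuming standardized (adjusted) Pearson residuals and a test of independence: residuals are compared against the bootstrap distribution of the maximum absolute residual under the null, which controls the familywise error rate across cells. The table and replication count are illustrative; the authors' published MATLAB/R code is the authoritative implementation.

```python
# Sketch: multiplicity-corrected residual analysis via a parametric
# bootstrap under independence. Data are synthetic.
import numpy as np

rng = np.random.default_rng(1)

def std_residuals(table):
    # Standardized (adjusted) Pearson residuals for a two-way table.
    n = table.sum()
    r = table.sum(axis=1, keepdims=True) / n   # row proportions
    c = table.sum(axis=0, keepdims=True) / n   # column proportions
    expected = n * r * c
    return (table - expected) / np.sqrt(expected * (1 - r) * (1 - c))

def bootstrap_critical_value(table, B=2000, alpha=0.05):
    # Resample tables under independence (product of observed margins)
    # and record the maximum absolute residual in each replicate.
    n = int(table.sum())
    p = np.outer(table.sum(axis=1), table.sum(axis=0)) / n**2
    maxima = np.empty(B)
    for b in range(B):
        boot = rng.multinomial(n, p.ravel()).reshape(table.shape).astype(float)
        maxima[b] = np.abs(std_residuals(boot)).max()
    return np.quantile(maxima, 1 - alpha)

table = np.array([[30.0, 10.0], [12.0, 28.0]])
crit = bootstrap_critical_value(table)
flagged = np.abs(std_residuals(table)) > crit  # cells exceeding the corrected bound
print(crit, flagged)
```

Because the critical value is the quantile of a maximum over all cells, a cell is flagged only when its residual is extreme relative to the whole table, rather than against the per-cell normal quantile that inflates the familywise type I error rate.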
Abstract:
The standard difference model of two-alternative forced-choice (2AFC) tasks implies that performance should be the same when the target is presented in the first or the second interval. Empirical data often show “interval bias” in that percentage correct differs significantly when the signal is presented in the first or the second interval. We present an extension of the standard difference model that accounts for interval bias by incorporating an indifference zone around the null value of the decision variable. Analytical predictions are derived which reveal how interval bias may occur when data generated by the guessing model are analyzed as prescribed by the standard difference model. Parameter estimation methods and goodness-of-fit testing approaches for the guessing model are also developed and presented. A simulation study is included whose results show that the parameters of the guessing model can be estimated accurately. Finally, the guessing model is tested empirically in a 2AFC detection procedure in which guesses were explicitly recorded. The results support the guessing model and indicate that interval bias is not observed when guesses are separated out.
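The mechanism behind the interval bias can be illustrated with a small simulation, assuming a symmetric difference model plus an indifference zone within which guesses favor the first interval. The parameter values below are illustrative, not estimates from the study.

```python
# Monte Carlo sketch of the "guessing model": a standard difference model
# plus an indifference zone of half-width delta in which the observer
# guesses interval 1 with probability g. Parameters are illustrative.
import numpy as np

rng = np.random.default_rng(7)

def simulate_pc(signal_interval, d_prime=1.0, delta=0.8, g=0.8, n=200_000):
    # Internal evidence per interval: noise ~ N(0,1), signal ~ N(d',1).
    x1 = rng.normal(d_prime if signal_interval == 1 else 0.0, 1.0, n)
    x2 = rng.normal(d_prime if signal_interval == 2 else 0.0, 1.0, n)
    d = x1 - x2                       # decision variable (difference model)
    resp1 = np.where(np.abs(d) < delta,
                     rng.random(n) < g,  # indifference zone: biased guess
                     d > 0)              # otherwise: pick the larger interval
    return np.mean(resp1 == (signal_interval == 1))

pc1 = simulate_pc(1)  # percent correct, signal in the first interval
pc2 = simulate_pc(2)  # percent correct, signal in the second interval
print(f"P(correct | 1st) = {pc1:.3f}, P(correct | 2nd) = {pc2:.3f}")
```

With g above 0.5 the simulated percent correct is higher when the signal is in the first interval, reproducing the interval bias; setting g = 0.5 (or removing the guesses, as in the empirical procedure described above) eliminates it.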
Abstract:
The rise of the twenty-first century has seen a further increase in the industrialization of Earth's resources, as society aims to meet the needs of a growing population while still protecting our environmental and natural resources. The advent of the industrial bioeconomy, which encompasses the production of renewable biological resources and their conversion into food, feed, and bio-based products, is seen as an important step in the transition towards sustainable development and away from fossil fuels. One sector of the industrial bioeconomy that is rapidly expanding is the use of bio-based feedstocks in electricity production as an alternative to coal, especially in the European Union.
As bioeconomy policies and objectives increasingly appear on political agendas, there is a growing need to quantify the impacts of transitioning from fossil fuel-based feedstocks to renewable biological feedstocks. Specifically, there is a growing need for a systems analysis of the potential risks of expanding the industrial bioeconomy, given that the flows within it are inextricably linked. Furthermore, greater analysis is needed of the consequences of shifting from fossil fuels to renewable feedstocks, in part through the use of life cycle assessment modeling to analyze impacts along the entire value chain.
To assess the emerging nature of the industrial bioeconomy, three objectives are addressed: (1) quantify the global industrial bioeconomy, linking the use of primary resources with the ultimate end product; (2) quantify the impacts of the expanding wood pellet energy export market of the Southeastern United States; and (3) conduct a comparative life cycle assessment, incorporating the use of dynamic life cycle assessment, of replacing coal-fired electricity generation in the United Kingdom with wood pellets produced in the Southeastern United States.
To quantify the emergent industrial bioeconomy, an empirical analysis was undertaken. Existing databases from multiple domestic and international agencies were aggregated and analyzed in Microsoft Excel to produce a harmonized dataset of the bioeconomy. First-person interviews, existing academic literature, and industry reports were then utilized to delineate the various intermediate and end-use flows within the bioeconomy. The results indicate that within a decade, the industrial use of agriculture has risen ten percent, given increases in the production of bioenergy and bioproducts. The underlying resources supporting the emergent bioeconomy (i.e., land, water, and fertilizer use) were also quantified and included in the database.
Following the quantification of the existing bioeconomy, an in-depth analysis of the bioenergy sector was conducted. Specifically, the focus was on quantifying the impacts of the emergent wood pellet export sector that has rapidly developed in recent years in the Southeastern United States. A cradle-to-gate life cycle assessment was conducted in order to quantify supply chain impacts from two wood pellet production scenarios: roundwood and sawmill residues. For each of the nine impact categories assessed, wood pellet production from sawmill residues resulted in higher values, ranging from 10-31% higher.
The analysis of the wood pellet sector was then expanded to include the full life cycle (i.e., cradle-to-grave). In doing so, the combustion of biogenic carbon and the subsequent timing of emissions were assessed by incorporating dynamic life cycle assessment modeling. Assuming immediate carbon neutrality of the biomass, the results indicated an 86% reduction in global warming potential when utilizing wood pellets as compared to coal for electricity production in the United Kingdom. When incorporating the timing of emissions, wood pellets equated to a 75% or 96% reduction in carbon dioxide emissions, depending upon whether the forestry feedstock was considered to be harvested or planted in year one, respectively.
Finally, a policy analysis of renewable energy in the United States was conducted. Existing coal-fired power plants in the Southeastern United States were assessed in terms of incorporating the co-firing of wood pellets. Co-firing wood pellets with coal in existing Southeastern United States power stations would result in a nine percent reduction in global warming potential.
Abstract:
Burn injuries in the United States account for over one million hospital admissions per year, with treatment costs estimated at four billion dollars. Of severe burn patients, 30-90% will develop hypertrophic scars (HSc). Current burn therapies rely upon the use of bioengineered skin equivalents (BSEs), which assist in wound healing but do not prevent HSc. HSc contraction occurs over 6-18 months and results in the formation of a fixed, inelastic skin deformity, with 60% of cases occurring across a joint. HSc contraction is characterized by an abnormally high presence of contractile myofibroblasts, which normally apoptose at the completion of the proliferative phase of wound healing. Additionally, clinical observation suggests that the likelihood of HSc is increased in injuries with a prolonged immune response. Given the pathogenesis of HSc, we hypothesize that BSEs should be designed with two key anti-scarring characteristics: (1) 3D architecture and surface chemistry that mitigate the inflammatory microenvironment and decrease myofibroblast transition; and (2) materials that persist in the wound bed throughout the remodeling phase of repair. We employed electrospinning and 3D printing to generate scaffolds with well-controlled degradation rate, surface coatings, and 3D architecture to explore our hypothesis through four aims.
In the first aim, we evaluate the impact of an elastomeric, randomly oriented, biostable polyurethane (PU) scaffold on HSc-related outcomes. In unwounded skin, native collagen is arranged randomly, elastin fibers are abundant, and myofibroblasts are absent. Conversely, in scar contractures, collagen is arranged in linear arrays and elastin fibers are few, while myofibroblast density is high. Randomly oriented collagen fibers native to the uninjured dermis encourage random cell alignment through contact guidance and do not transmit as much force as aligned collagen fibers. However, the linear ECM serves as a system for mechanotransduction between cells in a feed-forward mechanism, which perpetuates ECM remodeling and myofibroblast contraction. The electrospinning process allowed us to create scaffolds with randomly oriented fibers that promote random collagen deposition and decrease myofibroblast formation. Relative to an in vitro HSc contraction model, fibroblast-seeded PU scaffolds significantly decreased matrix and myofibroblast formation. In a murine HSc model, collagen-coated PU (ccPU) scaffolds significantly reduced HSc contraction as compared to untreated control wounds and wounds treated with the clinical standard of care. The data from this study suggest that electrospun ccPU scaffolds meet the requirements to mitigate HSc contraction, including reduction of in vitro HSc-related outcomes, diminished scar stiffness, and reduced scar contraction. While clinical dogma suggests treating severe burn patients with rapidly biodegrading skin equivalents, these data suggest that a longer-term scaffold may possess merit in reducing HSc.
In the second aim, we further investigate the impact of scaffold longevity on HSc contraction by studying a degradable, elastomeric, randomly oriented, electrospun micro-fibrous scaffold fabricated from the copolymer poly(l-lactide-co-ε-caprolactone) (PLCL). PLCL scaffolds displayed appropriate elastomeric and tensile characteristics for implantation beneath a human skin graft. In vitro analysis using normal human dermal fibroblasts (NHDF) demonstrated that PLCL scaffolds decreased myofibroblast formation as compared to an in vitro HSc contraction model. Using our murine HSc contraction model, we found that HSc contraction was significantly greater in animals treated with the standard of care, Integra, than in those treated with collagen-coated PLCL (ccPLCL) scaffolds at d 56 following implantation. Finally, wounds treated with ccPLCL were significantly less stiff than control wounds at d 56 in vivo. Together, these data further support our hypothesis that scaffolds which persist throughout the remodeling phase of repair represent a clinically translatable method to prevent HSc contraction.
In the third aim, we attempt to optimize cell-scaffold interactions by employing an anti-inflammatory coating on electrospun PLCL scaffolds. The anti-inflammatory sub-epidermal glycosaminoglycan hyaluronic acid (HA) was used as a coating material for PLCL scaffolds to encourage a regenerative healing phenotype. To minimize local inflammation, an anti-TNFα monoclonal antibody (mAb) was conjugated to the HA backbone prior to PLCL coating. ELISA analysis confirmed mAb activity following conjugation to HA (HA+mAb) and following adsorption of HA+mAb onto the PLCL backbone [(HA+mAb)PLCL]. Alcian blue staining demonstrated thorough HA coating of PLCL scaffolds using pressure-driven adsorption. In vitro studies demonstrated that treatment with (HA+mAb)PLCL prevented downstream inflammatory events in mouse macrophages treated with soluble TNFα. In vivo studies using our murine HSc contraction model suggested a positive impact of the HA coating, which was partially impeded by the inclusion of the anti-TNFα mAb. Further characterization of the inflammatory microenvironment of our murine model is required before conclusions can be drawn regarding the potential of anti-TNFα therapeutics for HSc. Together, our data demonstrate the development of a complex anti-inflammatory coating for PLCL scaffolds and the potential impact of altering the ECM coating material on HSc contraction.
In the fourth aim, we investigate how scaffold design, specifically pore dimensions, can influence myofibroblast interactions and the subsequent formation of OB-cadherin-positive adherens junctions in vitro. We collaborated with Wake Forest University to produce 3D printed (3DP) scaffolds with well-controlled pore sizes; we hypothesized that decreasing pore size would mitigate intercellular communication via OB-cadherin-positive adherens junctions. PU was 3D printed via pressure extrusion in a basket-weave design with a feature diameter of ~70 µm and pore sizes of 50, 100, or 150 µm. Tensile elastic moduli of 3DP scaffolds were similar to Integra; however, flexural moduli of 3DP scaffolds were significantly greater than Integra. 3DP scaffolds demonstrated ~50% porosity. Western blot data at 24 h and 5 d demonstrated significant increases in OB-cadherin expression in 100 µm pores relative to 50 µm pores, suggesting that pore size may play a role in regulating cell-cell communication. To analyze the impact of pore size in these scaffolds on scarring in vivo, scaffolds were implanted beneath a skin graft in a murine HSc model. While the scaffolds' flexural stiffness resulted in graft necrosis by d 14, cellular and blood vessel integration into the scaffolds was evident, suggesting potential for this design if employed in a less stiff material. In this study, we demonstrate for the first time that pore size alone impacts OB-cadherin protein expression in vitro, suggesting that pore size may play a role in the adherens junction formation affiliated with the fibroblast-to-myofibroblast transition. Overall, this work introduces a new bioengineered scaffold design to both study the mechanism behind HSc and prevent the clinical burden of this contractile disease.
Together, these studies inform the field of critical design parameters in scaffold design for the prevention of HSc contraction. We propose that scaffold 3D architectural design, surface chemistry, and longevity can be employed as key design parameters during the development of next generation, low-cost scaffolds to mitigate post-burn hypertrophic scar contraction. The lessening of post-burn scarring and scar contraction would improve clinical practice by reducing medical expenditures, increasing patient survival, and dramatically improving quality of life for millions of patients worldwide.
Abstract:
The commodification of natural resources and the pursuit of continuous growth have resulted in environmental degradation, depletion, and disparity in access to these life-sustaining resources, including water. Utility-based objectification and exploitation of water in some societies has brought us to the brink of crisis through an apathetic disregard for present and future generations. The ongoing depletion and degradation of the world's water sources, coupled with a reliance on Western knowledge and the continued omission of Indigenous knowledge to manage our relationship with water, have unduly burdened many, but particularly Indigenous communities. The goal of my thesis research is to call attention to and advance the value and validity of using both Indigenous and Western knowledge systems (also known as Two-Eyed Seeing) in water research and management to better care for water. To achieve this goal, I used a combined systematic and realist review method to identify and synthesize the peer-reviewed, integrative water literature, followed by semi-structured interviews with first authors of the exemplars from the included literature to identify the challenges and insights that researchers have experienced in conducting integrative water research. Findings suggest that these authors recognize that many previous attempts to integrate Indigenous knowledges have been tokenistic rather than meaningful, and that new methods for knowledge implementation are needed. Community-based participatory research methods, and the associated tenets of balancing power, fostering trust, and community ownership over the research process, emerged as a pathway towards the meaningful implementation of Indigenous and Western knowledge systems. Data also indicate that engagement and collaborative governance structures developed from a position of mutual respect are integral to the realization of a given project.
The recommendations generated from these findings offer support for future Indigenous-led research and partnerships through the identification and examination of approaches that facilitate the meaningful implementation of Indigenous and Western knowledge systems in water research and management. Asking Western science questions and seeking Indigenous science solutions does not appear to be working; instead, the co-design of research projects and asking questions directed at the problem rather than the solution better lends itself to the strengths of Indigenous science.
Abstract:
In the past decade, several major food safety crises originated from problems with feed. Consequently, there is an urgent need for early detection of fraudulent adulteration and contamination in the feed chain. Strategies are presented for two specific cases, viz. adulterations of (i) soybean meal with melamine and other types of adulterants/contaminants and (ii) vegetable oils with mineral oil, transformer oil, or other oils. These strategies comprise screening at the feed mill or port of entry with non-destructive spectroscopic methods (NIRS and Raman), followed by post-screening and confirmation in the laboratory with MS-based methods. The spectroscopic techniques are suitable for on-site and on-line applications. Currently they are suited to detecting fraudulent adulteration at relatively high levels, but not low-level contamination. The potential use of the strategies for non-targeted analysis is demonstrated.
Abstract:
In a team of multiple agents, the pursuit of a common goal is a defining characteristic. Since agents may have different capabilities, and the effects of actions may be uncertain, a common goal can generally only be achieved through careful cooperation between the different agents. In this work, we propose a novel two-stage planner that combines online planning at both the team level and the individual level through a subgoal delegation scheme. The proposal brings the advantages of online planning approaches to the multi-agent setting. A number of modifications are made to a classical approximate UCT algorithm to (i) adapt it to the application domains considered, (ii) reduce the branching factor in the underlying search process, and (iii) effectively manage uncertain information about action effects by using information fusion mechanisms. The proposed online multi-agent planner reduces the cost of planning and decreases the temporal cost of reaching a goal, while significantly increasing the chance of success in achieving the common goal.
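The planner above builds on the classical UCT algorithm. As a hedged illustration of the core mechanism (the action names, data layout, and exploration constant below are assumptions for the sketch, not details from the abstract), UCT selects among a node's children by balancing mean reward against an exploration bonus:

```python
import math

def uct_select(children, c=1.4):
    """Pick the child maximizing the UCB1 score used by UCT:
    mean reward (exploitation) plus an exploration bonus that
    shrinks as a child accumulates visits."""
    total_visits = sum(ch["visits"] for ch in children)
    def score(ch):
        if ch["visits"] == 0:
            return float("inf")  # unvisited actions are tried first
        exploit = ch["reward"] / ch["visits"]
        explore = c * math.sqrt(math.log(total_visits) / ch["visits"])
        return exploit + explore
    return max(children, key=score)

# Hypothetical team-level choices, e.g. delegating a subgoal vs. acting alone.
children = [
    {"action": "delegate_subgoal_A", "visits": 10, "reward": 7.0},
    {"action": "delegate_subgoal_B", "visits": 3,  "reward": 2.4},
    {"action": "act_alone",          "visits": 0,  "reward": 0.0},
]
print(uct_select(children)["action"])  # prints "act_alone": unvisited first
```

Pruning delegation options before this selection step is one way the branching-factor reduction mentioned in the abstract could be realized; the specific mechanisms used by the authors are not detailed here.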