Abstract:
The implementation of collaborative planning and teaching models in ten flexibly scheduled elementary and middle school library media centers was studied to determine which factors facilitated the collaborative planning process and to learn what occurs when library media specialists (LMSs) and classroom teachers (CTs) plan together. In this qualitative study, 61 principals, CTs, and LMSs were interviewed on a range of topics including the principal's role, school climate, the value of team planning, the importance of information literacy instruction, and the ideal learning environment. Other data sources were observations, videotapes of planning sessions, and documents. This three-year school reform effort was funded by the Library Power Project to improve library programs, to encourage collaborative planning, and to increase curricular integration of information literacy skills instruction.

The findings included a description of typical planning sessions and the identification of several major factors that impacted the success of collaborative planning: the individuals involved, school climate, time for planning, the organization of the school, the facility and collection, and training. Of these factors, the characteristics and actions of the people involved were most critical to the implementation of the innovation. The LMS was the pivotal player and, in the views of CTs, principals, and LMSs themselves, must be knowledgeable about curriculum, the library collection, and instructional design and delivery; must be open and welcoming to CTs and use good interpersonal skills; and must be committed to information literacy instruction and willing to act as a change agent. The support of the principal was vital; in schools with successful programs, the principal served as an advocate for collaborative planning and information literacy instruction, provided financial support for the library program including clerical staff, and arranged for LMSs and CTs to have time during the school day to plan together.

CTs involved in positive planning partnerships with LMSs were flexible, were open to change, used a variety of instructional materials, expected students to be actively involved in their own learning, and were willing to team teach with LMSs. Most CTs planning with LMSs made lesson plans in advance and preferred to plan with others. Also, most CTs in this study planned with grade-level or departmental groups, which expedited the delivery of information literacy instruction and the effective use of planning time.

Implications of the findings of this research project were discussed for individual schools, for school districts, and for colleges and universities training LMSs, CTs, and administrators. Suggestions for additional research were also included.
Abstract:
In this dissertation, I investigate three related topics in asset pricing: consumption-based asset pricing under long-run risks and fat tails, the pricing of VIX (CBOE Volatility Index) options, and the market price of risk embedded in stock returns and stock options. These three topics are fully explored in Chapters II through IV; Chapter V summarizes the main conclusions. In Chapter II, I explore the effects of fat tails on the equilibrium implications of the long-run risks model of asset pricing by introducing innovations with a dampened power law into the consumption and dividend growth processes. I estimate the structural parameters of the proposed model by maximum likelihood. I find that the stochastic volatility model with fat tails can, without resorting to high risk aversion, generate an implied risk premium, expected risk-free rate, and volatilities comparable to the magnitudes observed in the data. In Chapter III, I examine the pricing performance of VIX option models. The contention that simpler is better is supported by empirical evidence from actual VIX option market data. I find that no model has small pricing errors over the entire range of strike prices and times to expiration. In general, Whaley's Black-like option model produces the best overall results, supporting the simpler-is-better contention. However, the Whaley model does underprice out-of-the-money call and overprice out-of-the-money put VIX options, which is contrary to the behavior of stock index option pricing models. In Chapter IV, I explore risk pricing through a model of time-changed Lévy processes based on the joint evidence from individual stock options and underlying stocks. I specify a pricing kernel that prices idiosyncratic and systematic risks. This approach to examining risk premia on stocks deviates from existing studies. The empirical results show that the market pays positive premia for idiosyncratic and market jump-diffusion risk, and for idiosyncratic volatility risk.
However, there is no consensus on the premium for market volatility risk, which can be positive or negative. The positive premium on idiosyncratic risk runs contrary to the implications of traditional capital asset pricing theory.
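Whaley's Black-like model referenced above prices VIX options with Black's (1976) futures-option formula. A minimal sketch, assuming the standard Black-76 call formula; all numerical inputs below are hypothetical, not taken from the study:

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black76_call(F, K, T, r, sigma):
    """Black (1976) price of a European call on a futures price F:
    strike K, time to expiry T (years), rate r, volatility sigma."""
    d1 = (log(F / K) + 0.5 * sigma**2 * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return exp(-r * T) * (F * norm_cdf(d1) - K * norm_cdf(d2))

# Hypothetical at-the-money example: VIX futures at 20, strike 20,
# three months to expiry, 2% rate, 60% volatility of VIX.
price = black76_call(F=20.0, K=20.0, T=0.25, r=0.02, sigma=0.60)
```

The formula's only inputs are the futures price and option terms, which is what makes the model "simpler" than stochastic-volatility alternatives.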
Abstract:
This dissertation consists of three theoretical essays on immigration, international trade and political economy. The first two essays analyze the political economy of immigration in developed countries. The third essay explores new ground on the effects of labor liberalization in developing countries. Trade economists have witnessed remarkable methodological developments in mathematical and game-theoretic models during the last seventy years. This dissertation benefits from these advances to analyze economic issues related to immigration. The first essay applies a long-run general equilibrium trade model similar to Krugman (1980) and blends it with a median voter framework à la Mayer (1984). The second essay uses a short-run general equilibrium specific-factors trade model similar to Jones (1975) and combines it with a median voter model similar to Benhabib (1997). The third essay employs a five-stage game-theoretic approach similar to Vogel (2007) and solves it by backward induction. The first essay shows that labor liberalization is more likely to come about in societies that have a greater taste for variety, and that workers and capital owners can share the same positive stance toward labor liberalization. In a dynamic model, it demonstrates that the median voter is willing to accept fewer immigrants in the first period in order to preserve her domestic political influence in the second period, which is threatened by the naturalization of these immigrants. The second essay shows that the liberalization of labor depends on the host country's stock and distribution of capital, and on the number of groups of skilled workers within each country. I demonstrate that the more types of goods both countries produce, the more liberal the host country is toward immigration. The third essay proposes a theory of free movement of goods and labor between two economies with imperfect labor contracts. The heart of my analysis lies in the determinants of talent development, where individuals' decisions to emigrate are related to the fixed costs of emigration. Finally, free trade and labor mobility affect income via an indirect effect on individuals' incentives to invest in their skill levels and a direct effect on the prices of goods.
Abstract:
The distinctive karstic, freshwater wetlands of the northern Caribbean and Central American region support the prolific growth of calcite-rich periphyton mats. Aside from the Everglades, very little research has been conducted in these karstic wetlands, which are increasingly threatened by eutrophication. This study sought to (i) test the hypothesis that water depth and periphyton total phosphorus (TP) content are both drivers of periphyton biomass in karstic wetland habitats in Belize, Mexico and Jamaica, (ii) provide a taxonomic inventory of the periphytic diatom species in these wetlands and (iii) examine the relationship between periphyton mat TP concentration and diatom assemblage at Everglades and Caribbean locations.

Periphyton biomass, nutrient and diatom assemblage data were generated from periphyton mat samples collected from shallow, marl-based wetlands in Belize, Mexico and Jamaica. These data were compared to a larger dataset collected from comparable sites within Everglades National Park. A diatom taxonomic inventory was conducted on the Caribbean samples, and a combination of ordination and weighted-averaging modeling techniques was used to compare relationships between periphyton TP concentration, periphyton biomass and diatom assemblage composition among the locations.

Within the Everglades, periphyton biomass showed a negative correlation with water depth and mat TP, while periphyton mat percent organic content was positively correlated with these two variables. These patterns were also exhibited within the Belize, Mexico and Jamaica locations, suggesting that water depth and periphyton TP content are both drivers of periphyton biomass in karstic wetland systems within the northern Caribbean region.

A total of 146 diatom species representing 39 genera were recorded from the three Caribbean locations, including a distinct core group of species that may be endemic to this habitat type. Weighted-averaging models were produced that effectively predicted mat TP concentration from diatom assemblages for both Everglades (R² = 0.56) and Caribbean (R² = 0.85) locations. There were, however, significant differences between Everglades and Caribbean locations with respect to species TP optima and indicator species. This suggests that although diatoms are effective indicators of water quality in these wetlands, differences in species response to water quality changes can reduce the predictive power of these indices when applied across systems.
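The weighted-averaging approach used in these models can be sketched as follows: each species' TP optimum is its abundance-weighted mean TP across calibration samples, and a sample's inferred TP is the abundance-weighted mean of the optima of the species it contains. The numbers below are toy values, not data from the study:

```python
def wa_optima(abundance, env):
    """Species optimum = abundance-weighted mean of the environmental
    variable across samples. abundance[i][k] is the abundance of
    species k in sample i; env[i] is, e.g., periphyton-mat TP."""
    n_species = len(abundance[0])
    optima = []
    for k in range(n_species):
        num = sum(abundance[i][k] * env[i] for i in range(len(env)))
        den = sum(abundance[i][k] for i in range(len(env)))
        optima.append(num / den)
    return optima

def wa_infer(sample, optima):
    """Inferred TP for one sample = abundance-weighted mean of optima."""
    return sum(a * u for a, u in zip(sample, optima)) / sum(sample)

# Toy calibration set: 3 samples, 2 species, hypothetical TP values
abundance = [[8, 2], [5, 5], [1, 9]]
tp = [100.0, 300.0, 500.0]
optima = wa_optima(abundance, tp)       # one TP optimum per species
estimate = wa_infer([8, 2], optima)     # infer TP for a new sample
```

In practice a deshrinking regression is applied to the raw inferences; this sketch omits that step for brevity.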
Abstract:
This dissertation aimed to improve travel time estimation for transportation planning by developing a travel time estimation method that incorporates the effects of signal timing plans, which are difficult to consider in planning models. For this purpose, an analytical model was developed. The model parameters were calibrated based on data from CORSIM microscopic simulation, with signal timing plans optimized using the TRANSYT-7F software. Independent variables in the model are link length, free-flow speed, and traffic volumes from the competing turning movements. The developed model has three advantages compared to traditional link-based or node-based models. First, the model considers the influence of signal timing plans for a variety of traffic volume combinations without requiring signal timing information as input. Second, the model describes the non-uniform spatial distribution of delay along a link, making it possible to estimate the impacts of queues at different locations upstream of an intersection and to attribute delays to a subject link and its upstream link. Third, the model shows promise of improving the accuracy of travel time prediction. The mean absolute percentage error (MAPE) of the model is 13% for a set of field data from the Minnesota Department of Transportation (MDOT); this is close to the MAPE of uniform delay in the HCM 2000 method (11%). The HCM is the industry-accepted analytical model in the existing literature, but it requires signal timing information as input for calculating delays. The developed model also outperforms the HCM 2000 method for a set of Miami-Dade County data representing congested traffic conditions, with a MAPE of 29%, compared with 31% for the HCM 2000 method. The advantages of the proposed model make it feasible to apply to a large network without the burden of signal timing input, while improving the accuracy of travel time estimation.
An assignment model with the developed travel time estimation method has been implemented in a South Florida planning model, which improved assignment results.
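The MAPE figures quoted above follow the standard definition: the mean, over observations, of the absolute error as a fraction of the observed value. A sketch with hypothetical link travel times (the data below are illustrative, not MDOT or Miami-Dade values):

```python
def mape(observed, predicted):
    """Mean absolute percentage error, in percent.
    Assumes all observed values are non-zero."""
    errors = [abs(o - p) / o for o, p in zip(observed, predicted)]
    return 100.0 * sum(errors) / len(errors)

# Hypothetical link travel times in seconds: field vs. model output
obs = [60.0, 90.0, 120.0, 45.0]
pred = [66.0, 81.0, 132.0, 45.0]
err = mape(obs, pred)  # each of the first three links is off by 10%
```

A lower MAPE means predictions stay proportionally closer to field observations, which is why it is the comparison metric against the HCM 2000 method.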
Abstract:
This dissertation studies newly founded U.S. firms' survival using three different releases of the Kauffman Firm Survey. I study firms' survival from a different perspective in each chapter.

The first essay studies firms' survival through an analysis of their initial state at startup and the current state of the firms as they gain maturity. The probability of survival is determined using three probit models, using both firm-specific variables and an industry scale variable to control for the environment of operation. The firm-specific variables include size, experience and leverage as a debt-to-value ratio. The results indicate that size and relevant experience are both positive predictors for the initial and current states. Debt appears to be a predictor of exit if not justified wisely by acquiring assets. As suggested previously in the literature, entering a smaller-scale industry is a positive predictor of survival from birth. Finally, a smaller-scale industry diminishes the negative effects of debt.

The second essay makes use of a hazard model to confirm that new service-providing (SP) firms are more likely to survive than new product providers (PPs). I investigate the possible explanations for the higher survival rate of SPs using a Cox proportional hazards model. I examine six hypotheses (variations in capital per worker, expenses per worker, owners' experience, industry wages, assets and size), none of which appears to explain why SPs are more likely than PPs to survive. Two other possibilities, tax evasion and human/social relations, are discussed, but these could not be tested due to lack of data.

The third essay investigates women-owned firms' higher failure rates using a Cox proportional hazards approach on two models. I make use of a never-before-used variable that proxies for owners' confidence: the owners' self-evaluated competitive advantage.

The first empirical model allows me to compare women's and men's hazard rates for each variable. In the second model I successively add the variables that could potentially explain why women have a higher failure rate. Unfortunately, I am not able to fully explain the gender effect on firms' survival. Nonetheless, the second empirical approach allows me to confirm that social and psychological differences between genders are important in explaining the higher likelihood of failure among women-owned firms.
Abstract:
Limestone-based (karstic) freshwater wetlands of the Everglades, Belize, Mexico, and Jamaica are distinctive in having a high biomass of CaCO₃-rich periphyton mats. Diatoms are common components of these mats and show predictable responses to environmental variation, making them good candidates for assessing nutrient enrichment in these naturally ultraoligotrophic wetlands. However, apart from work in the Everglades of southern Florida, very little research has been done to document the diatoms and their environmental preferences in karstic Caribbean wetlands, which are increasingly threatened by eutrophication. We identified diatoms in periphyton mats collected during wet and dry periods from the Everglades and similar freshwater karstic wetlands in Belize, Mexico, and Jamaica. We compared diatom assemblage composition and diversity among locations and periods, and the effect of the limiting nutrient, P, on species composition among locations. We used periphyton-mat total P (TP) as a metric of availability. A total of 176 diatom species in 45 genera were recorded from the 4 locations. Twenty-three of these species, including 9 that are considered indicative of the Everglades diatom flora, were found in all 4 locations. In Everglades and Caribbean sites, we identified assemblages and indicator species associated with low and high periphyton-mat TP and calculated TP optima and tolerances for each indicator species. TP optima and tolerances of indicator species differed between the Everglades and the Caribbean, but weighted-averaging models predicted periphyton-mat TP concentrations from diatom assemblages at both Everglades (R² = 0.56) and Caribbean (R² = 0.85) locations. These results show that diatoms can be effective indicators of water quality in karstic wetlands of the Caribbean, but application of regionally generated transfer functions to distant sites provides less reliable estimates than locally developed functions.
Abstract:
Quantitative Structure-Activity Relationship (QSAR) modeling has been applied extensively in predicting the toxicity of Disinfection By-Products (DBPs) in drinking water. Among many toxicological properties, acute and chronic toxicities of DBPs have been widely used in health risk assessment of DBPs. These toxicities are correlated with molecular properties, which in turn are correlated with molecular descriptors. The primary goals of this thesis are: (1) to investigate the effects of molecular descriptors (e.g., chlorine number) on molecular properties such as the energy of the lowest unoccupied molecular orbital (ELUMO) via QSAR modeling and analysis; (2) to validate the models by using internal and external cross-validation techniques; and (3) to quantify the model uncertainties through Taylor series and Monte Carlo simulation methods. One very important way to predict molecular properties such as ELUMO is QSAR analysis. In this study, the number of chlorine atoms (NCl) and the number of carbon atoms (NC), as well as the energy of the highest occupied molecular orbital (EHOMO), are used as molecular descriptors. Three approaches are typically used in QSAR model development: (1) Linear or Multiple Linear Regression (MLR); (2) Partial Least Squares (PLS); and (3) Principal Component Regression (PCR). In QSAR analysis, a critical step is model validation, after QSAR models are established and before applying them to toxicity prediction. The DBPs studied include five chemical classes: chlorinated alkanes, alkenes, and aromatics. In addition, validated QSARs are developed to describe the toxicity of selected groups (i.e., chloro-alkane and aromatic compounds with a nitro or cyano group) of DBP chemicals to three types of organisms (e.g., fish, T. pyriformis, and P. phosphoreum) based on experimental toxicity data from the literature.

The results show that: (1) QSAR models to predict molecular properties built by MLR, PLS or PCR can be used either to select valid data points or to eliminate outliers; (2) the Leave-One-Out cross-validation procedure by itself is not enough to give a reliable representation of the predictive ability of the QSAR models; however, Leave-Many-Out/K-fold cross-validation and external validation can be applied together to achieve more reliable results; (3) ELUMO is shown to correlate strongly with NCl for several classes of DBPs; and (4) according to uncertainty analysis using the Taylor method, the uncertainty of the QSAR models stems mostly from NCl for all DBP classes.
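Leave-one-out cross-validation, mentioned above, can be sketched in a few lines: refit the model with each observation held out, predict the held-out point, and accumulate the squared prediction errors (the PRESS statistic). The sketch below uses simple linear regression on toy data standing in for a one-descriptor QSAR (e.g., a descriptor like NCl vs. a property like ELUMO); the numbers are illustrative, not from the thesis:

```python
def fit_slr(xs, ys):
    """Ordinary least squares fit of y = a + b*x; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

def loo_press(xs, ys):
    """Leave-one-out CV: refit without each point, predict it,
    and sum the squared prediction errors (PRESS)."""
    total = 0.0
    for i in range(len(xs)):
        xt, yt = xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:]
        a, b = fit_slr(xt, yt)
        total += (ys[i] - (a + b * xs[i])) ** 2
    return total

# Toy descriptor/property pairs (hypothetical values)
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [0.9, 2.1, 2.9, 4.2, 4.9]
press = loo_press(x, y)
```

K-fold cross-validation follows the same pattern with larger held-out groups, which is why combining it with external validation gives a sterner test of predictive ability than leave-one-out alone.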
Abstract:
Hydrogeologic variables controlling groundwater exchange with inflow and flow-through lakes were simulated using a three-dimensional numerical model (MODFLOW) to investigate and quantify spatial patterns of lake bed seepage and hydraulic head distributions in the porous medium surrounding the lakes. Also, the total annual inflow and outflow were calculated as a percentage of lake volume for flow-through lake simulations. The general exponential decline of seepage rates with distance offshore was best demonstrated at lower anisotropy ratios (i.e., Kh/Kv = 1, 10), with increasing deviation from the exponential pattern as anisotropy was increased to 100 and 1000. Two-dimensional vertical section models constructed for comparison with the 3-D models showed that groundwater heads and seepages were higher in the 3-D simulations. Addition of low-conductivity lake sediments decreased seepage rates nearshore and increased seepage rates offshore in inflow lakes, and increased the area of groundwater inseepage on the beds of flow-through lakes. Introduction of heterogeneity into the medium decreased the water table and seepage rates nearshore, and increased seepage rates offshore in inflow lakes. A laterally restricted aquifer located at the downgradient side of the flow-through lake increased the area of outseepage. Recharge rate, lake depth and lake bed slope had relatively little effect on the spatial patterns of seepage rates and groundwater exchange with lakes.
Abstract:
The sedimentary sections of three cores from the Celtic margin provide high-resolution records of the terrigenous fluxes during the last glacial cycle. A total of 21 ¹⁴C AMS dates allow us to define age models with a resolution better than 100 yr during critical periods such as Heinrich events 1 and 2. Maximum sedimentary fluxes occurred at the Meriadzek Terrace site during the Last Glacial Maximum (LGM). Detailed X-ray imagery of core MD95-2002 from the Meriadzek Terrace shows no sedimentary structures suggestive of either deposition from high-density turbidity currents or significant erosion. Two paroxysmal terrigenous flux episodes have been identified. The first occurred after the deposition of Heinrich event 2 Canadian ice-rafted debris (IRD) and includes IRD from European sources. We suggest that the second represents an episode of deposition from turbid plumes, which precedes IRD deposition associated with Heinrich event 1. At the end of marine isotopic stage 2 (MIS 2) and the beginning of MIS 1 the highest fluxes are recorded on the Whittard Ridge, where they correspond to deposition from turbidity current overflows. Canadian icebergs rafted debris at the Celtic margin during Heinrich events 1, 2, 4 and 5. The high-resolution records of Heinrich events 1 and 2 show that in both cases the arrival of the Canadian icebergs was preceded by a European ice-rafting precursor event, which took place about 1-1.5 kyr before. Two rafting episodes of European IRD also occurred immediately after Heinrich event 2 and just before Heinrich event 1. The terrigenous fluxes recorded in core MD95-2002 during the LGM are the highest reported at hemipelagic sites from the northwestern European margin. The magnitude of the Canadian IRD fluxes at Meriadzek Terrace is similar to those from oceanic sites.
Abstract:
The rainbow smelt (Osmerus mordax) is an anadromous teleost that produces type II antifreeze protein (AFP) and accumulates modest urea and high glycerol levels in plasma and tissues as adaptive cryoprotectant mechanisms at sub-zero temperatures. It is known that glyceroneogenesis occurs in the liver via a branch in glycolysis and gluconeogenesis and is activated by low temperature; however, the precise mechanisms of glycerol synthesis and trafficking in smelt remain to be elucidated. The objective of this thesis was to provide further insight using functional genomic techniques [e.g. suppression subtractive hybridization (SSH) cDNA library construction and microarray analyses] and molecular analyses [e.g. cloning and quantitative reverse transcription-polymerase chain reaction (QPCR)]. Novel molecular mechanisms related to glyceroneogenesis were deciphered by comparing the transcript expression profiles of glycerol-accumulating (cold temperature) and non-glycerol-accumulating (warm temperature) hepatocytes (Chapter 2) and livers from intact smelt (Chapter 3). Briefly, glycerol synthesis can be initiated from both amino acids and carbohydrate; however, carbohydrate appears to be the preferred source when it is readily available. In glycerol-accumulating hepatocytes, levels of the hepatic glucose transporter (GLUT2) plummeted, and transcript levels of a suite of genes (PEPCK, MDH2, AAT2, GDH and AQP9) associated with the mobilization of amino acids to fuel glycerol synthesis were all transiently higher. In contrast, in glycerol-accumulating livers from intact smelt, glycerol synthesis was primarily fuelled by glycogen degradation, with higher PGM and PFK (glycolysis) transcript levels. Whether initiated from amino acids or carbohydrate, there were common metabolic underpinnings. Increased PDK2 (an inhibitor of PDH) transcript levels would direct pyruvate derived from amino acids and/or DHAP derived from G6P to glycerol as opposed to oxidation via the citric acid cycle. Robust LIPL (triglyceride catabolism) transcript levels would provide free fatty acids that could be oxidized to fuel ATP synthesis. Increased cGPDH (glyceroneogenesis) transcript levels were not required for increased glycerol production, suggesting that regulation is more likely by post-translational modification. Finally, levels of a transcript potentially encoding glycerol-3-phosphatase, an enzyme not yet characterized in any vertebrate species, were transiently higher. These comparisons also led to the novel discoveries that increased G6Pase (glucose synthesis) and increased GS (glutamine synthesis) transcript levels were part of the low-temperature response in smelt. Glucose may provide increased colligative protection against freezing, whereas glutamine could serve to store nitrogen released from amino acid catabolism in a non-toxic form and/or be used to synthesize urea via purine synthesis-uricolysis. Novel key aspects of cryoprotectant osmolyte (glycerol and urea) trafficking were elucidated by cloning and characterizing three aquaglyceroporin (GLP)-encoding genes from smelt at the gene and cDNA levels in Chapter 4. GLPs are integral membrane proteins that facilitate passive movement of water, glycerol and urea across cellular membranes. The highlight was the discovery that AQP10ba transcript levels increase in the posterior kidney only at low temperature. This AQP10b gene paralogue may have evolved to aid in the reabsorption of urea from the proximal tubule. This research has contributed significantly to a general understanding of the cold adaptation response in smelt, and more specifically to the development of a working scenario for the mechanisms involved in glycerol synthesis and trafficking in this species.
Abstract:
Thermal analysis of electronic devices is one of the most important steps in the design of modern devices. Precise thermal analysis is essential for designing an effective thermal management system for modern electronic devices such as batteries, LEDs, microelectronics, ICs, circuit boards, semiconductors and heat spreaders. For a precise thermal analysis, the temperature profile and thermal spreading resistance of the device should be calculated by considering the geometry, material properties and boundary conditions. Thermal spreading resistance occurs when heat enters through a portion of a surface and flows by conduction. It is the primary source of thermal resistance when heat flows from a tiny heat source to a thin and wide heat spreader. In this thesis, analytical models of the temperature behavior and thermal resistance in some common geometries of microelectronic devices, such as heat channels and heat tubes, are investigated. Different boundary conditions for the system are considered. Along the source plane, combinations of discretely specified heat flux, specified temperatures and adiabatic conditions are studied. Along the walls of the system, adiabatic or convective cooling boundary conditions are assumed. Along the sink plane, convective cooling with a constant or variable heat transfer coefficient is considered. Also, the effect of orthotropic properties is discussed. This thesis contains nine chapters. Chapter one is the introduction and presents the concept of thermal spreading resistance as well as the originality and importance of the work. Chapter two reviews the literature on thermal spreading resistance over the past fifty years, with a focus on recent advances. In chapters three and four, the thermal resistance of a two-dimensional flux channel with a non-uniform convection coefficient in the heat sink plane is studied. The non-uniform convection is modeled using two functions that can simulate a wide variety of heat sink configurations. In chapter five, a non-symmetric flux channel with different heat transfer coefficients along the right and left edges and the sink plane is analytically modeled. Because of the edge cooling and non-symmetry, the eigenvalues of the system are defined using the heat transfer coefficients on both edges, and a normalized function is calculated to satisfy the orthogonality condition. In chapter six, the thermal behavior of a two-dimensional rectangular flux channel with arbitrary boundary conditions on the source plane is presented. The boundary condition along the source plane can be a combination of the first kind (Dirichlet, or prescribed temperature) and the second kind (Neumann, or prescribed heat flux). The proposed solution can be used to model flux channels with numerous different source-plane boundary conditions without any limitation on the number and position of heat sources. In chapter seven, the temperature profile of a circular flux tube with discretely specified boundary conditions along the source plane is presented, and the effect of orthotropic properties is discussed. In chapter eight, a three-dimensional rectangular flux channel with non-uniform heat convection along the heat sink plane is analytically modeled. In chapter nine, a summary of the achievements is presented and some systems are proposed for future study. All the models and case studies in the thesis are compared with the Finite Element Method (FEM).
Abstract:
This Ph.D. thesis addresses current issues with ichnotaxonomic practice and characterizes an exceptionally well-preserved ichnological assemblage from the Carboniferous Stainmore Formation, Northumberland, United Kingdom. Samples were collected from closely localized float representative of various units throughout the succession, which was deposited in a storm-dominated marine shoreface. Three dominant ichnotaxa were selected for three-dimensional morphological analysis due to their complicated morphology and/or unclear taxonomic status: 1) Dactyloidites jordii isp. nov.; 2) Beaconites capronus; and 3) Neoeione moniliformis comb. nov. Using serial grinding and photography, these ichnotaxa were ground and modelled in true colour. The high-resolution models of the three taxa produced in this study are the basis of the first complete three-dimensional consideration of these traces, and form the basis for refined palaeobiological and ethological analysis of these taxa. Dactyloidites jordii isp. nov. is a stellate to palmate burrow composed of numerous long, narrow rays that exhibit three orders of branching, arranged into tiered galleries radiating from a central shaft. It is considered to be the feeding structure of a vermiform organism. Beaconites capronus is a winding trace with distinctly chevron-shaped, meniscate backfill; its maker is demonstrated herein to backfill the vertical shafts associated with its burrows in a fashion comparable to the horizontal portion of the burrow. This lack of a surface connection would result in the trace-making organism being exposed to low-oxygen porewater. Burrowing organisms could cope with this porewater dysoxia in a number of ways: 1) revisiting the sediment-water interface; 2) creating periodic shafts; or 3) employing anaerobic metabolism. Neoeione moniliformis was originally introduced as Eione moniliformis; however, the genus Eione Tate, 1859 is a junior homonym of Eione Rafinesque, 1814. This led to the transfer of Eione moniliformis to Parataenidium. Through careful examination and three-dimensional characterization of topotypes, the transfer to Parataenidium moniliformis is demonstrated herein to be problematic, as Parataenidium refers to primarily horizontal burrows with two distinct layers, whereas Eione moniliformis is composed of a single distinct level. As such, the new ichnogenus Neoeione is created to accommodate Neoeione moniliformis.
Abstract:
Parent-mediated early intervention programs depend on the willingness and ability of parents to complete prescribed activities with their children. In other contexts, internal factors, such as stages of change, and external factors, such as barriers to treatment, have been shown to correlate with adherence to services. This researcher modified the Stages of Change Questionnaire and the Barriers to Treatment Participation Scale (BTPS) for use with this population. Despite initial interest, only twenty-three parent participants were referred to the researcher over the course of three years, and only five parents took part in the study. A population base ten times that of the current sample would be required to recruit enough participants (fifty-one) to provide sufficient statistical power. This feasibility study discusses the results of the five parent participants. Findings suggest that the modified Stages of Change Questionnaire may not be sensitive enough for use with the current sample, while the modified BTPS may yield useful information for service providers.
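The sample-size requirement above comes from a power analysis. One standard approach for correlational designs, sketched here, sizes the study via Fisher's z transformation; the effect size, alpha, and power below are assumed for illustration and are not the study's actual inputs (the abstract's figure of fifty-one would follow from a somewhat smaller assumed effect):

```python
from math import atanh, ceil
from statistics import NormalDist

def n_for_correlation(r, alpha=0.05, power=0.80):
    """Approximate sample size needed to detect a correlation r
    in a two-sided test, via Fisher's z transformation."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # critical value, two-sided
    z_beta = z.inv_cdf(power)            # quantile for desired power
    return ceil(((z_alpha + z_beta) / atanh(r)) ** 2 + 3)

# Assumed medium effect size r = 0.4 (illustrative, not from the study)
n_needed = n_for_correlation(0.4)
```

Smaller anticipated correlations drive the required sample up quickly, which is why a referral pool far larger than twenty-three parents was needed.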