842 results for Good lives model
Abstract:
Adolescence is a pivotal period offering both opportunities and constraints on individual development. It is during this important time that one decides upon and commits to the values, goals, and beliefs that will form one's identity and guide one throughout the lifespan. Positive youth development programs, such as the Miami Youth Development Project's Changing Lives Program, target the formation of a positive sense of identity as a critical intervention point. Through developing a coherent and positive sense of self, adolescents take control of and responsibility for their lives and their decisions. Furthermore, a positive identity has been found to be a developmental asset and is linked to lower risk behaviors and positive outcomes, including increased self-esteem, sense of purpose, and a positive view of the future. Positive youth development programs that promote positive identity development have been found to be more strongly tied to positive outcomes, including skills, values, and competencies, than have contextual opportunities. As such, it is critical to determine what leads to positive identity development.

The current study used structural equation modeling to evaluate three potential mediators of identity development. Findings indicated good model fit, where change in identity commitment and change in identity exploration were mediated by informational identity style, personal expressiveness, and identity distress. There were also significant differences between the control and intervention groups, indicative of intervention effects. The findings suggest potential areas of intervention as well as the need for further research, including longitudinal study and the use of qualitative methodology.
Abstract:
This dissertation focused on the longitudinal analysis of business start-ups using three waves of data from the Kauffman Firm Survey.

The first essay used the data from years 2004-2008 and examined the simultaneous relationship between a firm's capital structure, its human resource policies, and their impact on the level of innovation. Firm leverage was calculated as debt divided by total financial resources. An index of employee well-being was constructed from a set of nine dichotomous questions asked in the survey. A negative binomial fixed effects model was used to analyze the effect of employee well-being and leverage on the count of patents and copyrights, which served as a proxy for innovation. The essay demonstrated that employee well-being positively affects the firm's innovation, while a higher leverage ratio has a negative impact on innovation. No significant relation was found between leverage and employee well-being.

The second essay used the data from years 2004-2009 and asked whether a higher entrepreneurial speed of learning is desirable, and whether there is a linkage between the speed of learning and the growth rate of the firm. The change in the speed of learning was measured using a pooled OLS estimator in repeated cross-sections. There was evidence of a declining speed of learning over time, and it was concluded that a higher speed of learning is not necessarily a good thing, because the speed of learning is contingent on the entrepreneur's initial knowledge and the precision of the signals he receives from the market. Also, there was no reason to expect the speed of learning to be related to the growth of the firm in one direction over another.

The third essay used the data from years 2004-2010 and determined the timing of diversification activities by business start-ups. It captured when a start-up diversified for the first time and explored the association between an early diversification strategy adopted by a firm and its survival rate. A semi-parametric Cox proportional hazard model was used to examine the survival pattern. The results demonstrated that firms diversifying at an early stage in their lives show a higher survival rate; however, this effect fades over time.
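As a hedged illustration of the first essay's setup (not the author's actual code; the file and the column names debt, equity, patents_and_copyrights, and wellbeing_index are hypothetical), a negative binomial count regression of innovation on leverage and employee well-being might look like this in Python with statsmodels:

```python
# Sketch of the first essay's innovation model (hypothetical column names).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("kfs_2004_2008.csv")  # hypothetical Kauffman Firm Survey extract

# Leverage as defined in the abstract: debt / total financial resources.
df["leverage"] = df["debt"] / (df["debt"] + df["equity"])

# Negative binomial regression of the patent-and-copyright count on leverage
# and the nine-item employee well-being index. The study used a fixed effects
# variant; firm dummies, e.g. "+ C(firm_id)", would approximate that here.
model = smf.negativebinomial(
    "patents_and_copyrights ~ leverage + wellbeing_index", data=df
).fit()
print(model.summary())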
Abstract:
Within the Stage II program evaluation of the Miami Youth Development Project's (YDP) Changing Lives Program (CLP), this study evaluated CLP intervention effectiveness in promoting positive change in emotion-focused identity exploration (i.e., feelings of personal expressiveness; PE) and a "negative" symptom of identity development (i.e., identity distress; ID) as a first step toward the investigation of a self-transformative model of identity development in adolescent youth. Using structural equation modeling techniques, the study found that participation in the CLP is associated with positive changes in PE (path = .841, p < .002) but not with changes in ID. Increases in ID scores were also found to be associated with increases in PE (path = .229, p < .002). Intervention effects were not moderated by age/stage, gender, or ethnicity, though differences were found in the degree to which participating subgroups (African-American/Hispanic, male/female, 14-16 years old/17-19 years old) experienced change in PE and ID. Findings also suggest that moderate levels of ID may not be deleterious to identity exploration and may be associated with active exploration.
Abstract:
Non-Destructive Testing (NDT) of deep foundations has become an integral part of the industry's standard manufacturing processes. It is not unusual for the evaluation of the integrity of the concrete to include the measurement of ultrasonic wave speeds. Numerous methods have been proposed that use the propagation speed of ultrasonic waves to check the integrity of concrete in drilled shaft foundations. All such methods evaluate the integrity of the concrete inside the cage and between the access tubes. The integrity of the concrete outside the cage remains to be considered in order to locate the border between the concrete and the soil and thus obtain the diameter of the drilled shaft. It would also be economical to devise a methodology that obtains the diameter using the Cross-Hole Sonic Logging (CSL) system, since the same CSL setup already used to check the integrity of the inside concrete could then determine the drilled shaft diameter without having to set up another NDT device.

The proposed new method is based on the installation of galvanized tubes outside the shaft across from each inside tube, and on performing the CSL test between the inside and outside tubes. From the experimental work, a model is developed to evaluate the relationship between the thickness of the concrete and the ultrasonic wave properties using signal processing. The experimental results show a direct correlation between the concrete thickness outside the cage and the maximum amplitude of the received signal obtained from the frequency-domain data. This study demonstrates how the new method of measuring the diameter of drilled shafts during construction overcomes the limitations of currently used methods.

In another part of the study, a new method is proposed to visualize and quantify the extent and location of defects. It is based on a color change in the frequency amplitude of the signal recorded by the receiver probe at the location of defects and is called Frequency Tomography Analysis (FTA). Time-domain data for the signals propagated between tubes are transformed to the frequency domain using the Fast Fourier Transform (FFT), and the distribution of the FTA is then evaluated. This method is employed after CSL has determined a high probability of an anomaly in a given area, and it is applied to improve location accuracy and further characterize the feature. The technique has very good resolution and clarifies the exact depth of any void or defect along the length of the drilled shaft for voids inside the cage.

The last part of the study evaluates the effect of voids inside and outside the reinforcement cage, and of corrosion in the longitudinal bars, on the strength and axial load capacity of drilled shafts. The objective is to quantify the loss in axial strength and stiffness of drilled shafts due to the presence of different types of symmetric voids and corrosion throughout their lengths.
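As an illustrative sketch of the frequency-domain step described above (not the study's actual processing chain; the pulse parameters are hypothetical), one can transform a received CSL trace with an FFT and extract the maximum spectral amplitude, the quantity the study correlates with concrete thickness outside the cage:

```python
# Transform a time-domain CSL trace to the frequency domain and return the
# peak spectral amplitude and its frequency.
import numpy as np

def max_frequency_amplitude(trace, sample_rate_hz):
    """Return (peak amplitude, peak frequency in Hz) of a time-domain trace."""
    spectrum = np.fft.rfft(trace)  # one-sided FFT of the real-valued signal
    freqs = np.fft.rfftfreq(len(trace), d=1.0 / sample_rate_hz)
    amplitudes = np.abs(spectrum)
    k = int(np.argmax(amplitudes))
    return amplitudes[k], freqs[k]

# Hypothetical example: a decaying 50 kHz ultrasonic pulse sampled at 1 MHz.
t = np.arange(0, 1e-3, 1e-6)
trace = np.exp(-t / 2e-4) * np.sin(2 * np.pi * 50e3 * t)
amp, freq = max_frequency_amplitude(trace, sample_rate_hz=1e6)
print(f"peak amplitude {amp:.1f} at {freq / 1e3:.0f} kHz")
```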
Abstract:
Ensemble stream modeling and data-cleaning are sensor information processing systems with different training and testing methods, whose goals are cross-validated against each other. This research examines a mechanism that seeks to extract novel patterns by generating ensembles from data. The main goal of label-less stream processing is to process the sensed events so as to eliminate uncorrelated noise and choose the most likely model without overfitting, thereby obtaining higher model confidence. Higher quality streams can be realized by combining many short streams into an ensemble with the desired quality. The framework for the investigation is an existing data mining tool.

First, to accommodate feature extraction for an event such as a bush or natural forest fire, we take the burnt area (BA*), a sensed ground truth obtained from logs, as our target variable. Even though this is an obvious model choice, the results are disappointing, for two reasons: first, the histogram of fire activity is highly skewed; second, the measured sensor parameters are highly correlated. Since using non-descriptive features does not yield good results, we resort to temporal features. By doing so we carefully eliminate averaging effects; the resulting histogram is more satisfactory, and conceptual knowledge is learned from the sensor streams.

Second is the process of feature induction, cross-validating attributes against single or multiple target variables to minimize training error. We use the F-measure, which combines precision and recall, to determine the false alarm rate for fire events. The multi-target data-cleaning trees use the information purity of the target leaf nodes to learn higher order features, and a sensitive variance measure such as the F-test is applied at each node's split to select the best attribute. The ensemble stream model approach proved to improve when complicated features were used with a simpler tree classifier. The ensemble framework for data-cleaning, together with enhancements to quantify quality of fitness (30% spatial, 10% temporal, and 90% mobility reduction), led to the formation of streams for sensor-enabled applications, which further motivates the novelty of stream quality labeling and its importance in handling the vast amounts of real-time mobile streams generated today.
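For reference, the F-measure used above is the harmonic mean of precision and recall; a minimal sketch with hypothetical detection counts:

```python
# F-measure (F1): harmonic mean of precision and recall, from raw
# true-positive / false-positive / false-negative counts.
def f_measure(tp: int, fp: int, fn: int) -> float:
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical fire-event detections: 42 hits, 7 false alarms, 11 misses.
print(f"F1 = {f_measure(42, 7, 11):.3f}")
```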
Abstract:
SuperScaling model (SuSA) predictions for neutrino-induced charged-current π⁺ production in the Δ-resonance region are explored under MiniBooNE experimental conditions. The SuSA charged-current π⁺ results are in good agreement with data on neutrino flux-averaged double-differential cross sections. The SuSA model for quasielastic scattering and its extension to the pion production region are used for predictions of charged-current inclusive neutrino-nucleus cross sections. Results are also compared with the T2K experimental data for inclusive scattering.
Abstract:
Health professionals in several areas (pediatricians, nutritionists, orthopedists, endocrinologists, dentists, etc.) use bone age assessment to diagnose growth disorders in children. Through interviews with specialists in diagnostic imaging and a review of the literature, we identified the Tanner-Whitehouse (TW) method as the most efficient. Even though it achieves better results than other methods, it is still not the most widely used, owing to the complexity of its application. This work presents the possibility of automating the method so that its use becomes more widespread. Two important steps in the evaluation of bone age are addressed: the identification and the classification of regions of interest. Even in radiographs in which the positioning of the hands was not suitable for the TW method, the finger identification algorithm showed good results, as did the use of Active Appearance Models (AAM) for identifying regions of interest in radiographs with high contrast and brightness variation. Good results were also obtained in classifying the epiphyses into their stages of development, with the middle epiphysis of finger III chosen to demonstrate performance. The final results show an average hit rate of 90%, and for the misclassified cases the error was only one stage away from the correct stage.
Abstract:
Teachers can have a profound effect on us all, both good and bad. In this paper, the effect two individual midwives had on my evolution as a midwife will be examined. They were very different: one was formal, the other informal; the classroom was the setting for one, the clinical area for the other. Each had her own unique style, way of looking at the world, manner, and approach. However, they each shared a philosophy of woman-centred, normal birth, which they espoused in all aspects of their working lives.
Abstract:
Today, the trend towards decentralization is far-reaching. Proponents of decentralization have argued that it promotes responsive and accountable local government by shortening the distance between local representatives and their constituents. In this paper, however, I focus on the countervailing effect of decentralization on the accountability mechanism, arguing that decentralization, by increasing the number of actors eligible for policy making and implementation in governance as a whole, may blur lines of responsibility and thus weaken citizens' ability to sanction the government in elections. Using an ordinary least squares (OLS) interaction model based on historical panel data for 78 countries over the 2002-2010 period, I test the hypothesis that as the number of government tiers increases, there will be a negative interaction between the number of government tiers and decentralization policies. The regression results show empirical evidence that decentralization policies, which have a positive impact on governance under a relatively simple form of multilevel governance, no longer have statistically significant effects once the complexity of the government structure exceeds a certain degree. In particular, the paper finds that the presence of intergovernmental meetings with legally binding authority has a negative impact on governance when the complexity of the government structure reaches the highest level.
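A minimal sketch of the kind of interaction specification described (not the author's actual model; the file and the column names governance, decentralization, tiers, and country are hypothetical):

```python
# OLS with an interaction term between decentralization policies and the
# number of government tiers; the coefficient on the product term tests
# whether the effect of decentralization weakens as tiers increase.
import pandas as pd
import statsmodels.formula.api as smf

panel = pd.read_csv("decentralization_panel_2002_2010.csv")  # hypothetical
model = smf.ols("governance ~ decentralization * tiers", data=panel).fit(
    cov_type="cluster", cov_kwds={"groups": panel["country"]}
)
print(model.summary())  # interaction term appears as decentralization:tiers
```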
Abstract:
The Model for Prediction Across Scales (MPAS) is a novel set of Earth system simulation components consisting of an atmospheric model, an ocean model, and a land-ice model. Its distinct features are the use of unstructured Voronoi meshes and C-grid discretisation to address shortcomings, with respect to parallel scalability, numerical accuracy, and physical consistency, of global models on regular grids and of limited-area models nested in a forcing data set. This concept allows one to include the feedback of regional land-use information on weather and climate at local and global scales in a consistent way, which is impossible to achieve with traditional limited-area modelling approaches. Here, we present an in-depth evaluation of MPAS with regard to the technical aspects of performing model runs and scalability for three medium-size meshes on four different high-performance computing (HPC) sites with different architectures and compilers. We uncover model limitations and identify new aspects of model optimisation that are introduced by the use of unstructured Voronoi meshes. We further demonstrate the performance of MPAS in terms of its capability to reproduce the dynamics of the West African monsoon (WAM) and its associated precipitation in a pilot study. Constrained by the available computational resources, we compare 11-month runs for two meshes with observations and with a reference simulation from the Weather Research and Forecasting (WRF) model. We show that MPAS can reproduce the atmospheric dynamics on global and local scales in this experiment, but identify a precipitation excess for the West African region. Finally, we conduct extreme scaling tests on a global 3 km mesh with more than 65 million horizontal grid cells on up to half a million cores. We discuss the modifications of the model code necessary to improve its parallel performance in general and specifically for the HPC environment. We confirm good scaling (70% parallel efficiency or better) of the MPAS model and provide numbers on the computational requirements for experiments with the 3 km mesh. In doing so, we show that global, convection-resolving atmospheric simulations with MPAS are within reach of current and next generations of high-end computing facilities.
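For context, the parallel efficiency quoted above is conventionally the measured speedup divided by the increase in core count relative to a baseline run; a small sketch with hypothetical timings (not the study's measured numbers):

```python
# Strong-scaling parallel efficiency: E = (t_base * p_base) / (t * p).
# A value of 0.70 means the code sustains 70% of ideal linear scaling.
def parallel_efficiency(t_base: float, p_base: int, t: float, p: int) -> float:
    return (t_base * p_base) / (t * p)

# Hypothetical pair of runs on the same mesh at two core counts.
print(parallel_efficiency(t_base=3600.0, p_base=65536, t=950.0, p=262144))  # ~0.95
```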
Abstract:
Investigating the variability of Agulhas leakage, the volume transport of water from the Indian Ocean to the South Atlantic Ocean, is highly relevant due to its potential contribution to the Atlantic Meridional Overturning Circulation as well as to the global circulation of heat and salt, and hence global climate. Quantifying Agulhas leakage is challenging due to the non-linear nature of the process; current observations are insufficient to estimate its variability, and ocean models all have biases in this region, even at high resolution. An Eulerian threshold integration method is developed to examine the mechanisms of Agulhas leakage variability in six ocean model simulations of varying resolution. This intercomparison, based on the circulation and thermohaline structure at the Good Hope line, a transect to the south west of the southern tip of Africa, is used to identify features that are robust regardless of the model used, while taking into account the thermohaline biases of each model. When benchmarked against a passive tracer method, the approach captures 60% of the magnitude of Agulhas leakage and more than 80% of its temporal fluctuations, suggesting that it is appropriate for investigating the variability of Agulhas leakage. In all simulations but one, the major driver of variability is associated with mesoscale features passing through the section. High-resolution (finer than 1/10 deg.) hindcast models agree on the temporal (2-4 cycles per year) and spatial (300-500 km) scales of these features, corresponding to observed Agulhas Rings. Coarser-resolution (1/4 deg.) models reproduce similar time scales of Agulhas leakage variability in spite of their difficulties in representing the properties of Agulhas Rings. A coarse-resolution climate model (2 deg.) does not resolve the spatio-temporal mechanism of Agulhas leakage variability, and hence it is expected to underestimate the contribution of the Agulhas Current System to climate variability.
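A schematic of an Eulerian threshold integration across a model section like the Good Hope line (purely illustrative; the temperature criterion, array shapes, and values are hypothetical stand-ins, not the study's actual configuration):

```python
# Integrate volume transport across a section, counting only cells whose
# water-mass properties pass a threshold criterion (a simple temperature
# threshold here, standing in for an Agulhas-water criterion).
import numpy as np

def threshold_transport(v, temp, cell_area, temp_min=14.0):
    """Sum v * area (m^3/s) over cells with temp above the threshold."""
    mask = temp > temp_min
    return float(np.sum(v * cell_area * mask))

# Hypothetical section: 50 depth levels x 200 cells along the transect.
rng = np.random.default_rng(0)
v = rng.normal(0.0, 0.1, size=(50, 200))       # cross-section velocity, m/s
temp = rng.uniform(4.0, 22.0, size=(50, 200))  # temperature, degrees C
area = np.full((50, 200), 2.0e6)               # cell area, m^2
print(f"{threshold_transport(v, temp, area) / 1e6:.2f} Sv")
```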
Abstract:
Legionella pneumophila, the causative agent of a severe pneumonia named Legionnaires' disease, is an important human pathogen that infects and replicates within alveolar macrophages. Its virulence depends on the Dot/Icm type IV secretion system (T4SS), which is essential to establish a replication-permissive vacuole known as the Legionella-containing vacuole (LCV). L. pneumophila infection can be modeled in mice; however, most mouse strains are not permissive, prompting the search for novel infection models. We have recently shown that larvae of the wax moth Galleria mellonella are suitable for the investigation of L. pneumophila infection. G. mellonella is increasingly used as an infection model for human pathogens, and a good correlation exists between the virulence of several bacterial species in the insect and in mammalian models. A key component of the larvae's immune defenses are hemocytes, professional phagocytes that take up and destroy invaders. L. pneumophila is able to infect these cells, form an LCV, and replicate within them. Here we demonstrate protocols for analyzing L. pneumophila virulence in the G. mellonella model, including how to grow infectious L. pneumophila, pretreat the larvae with inhibitors, infect the larvae, and extract infected cells for quantification and immunofluorescence microscopy. We also describe how to quantify bacterial replication and fitness in competition assays. These approaches allow the rapid screening of mutants to determine factors important in L. pneumophila virulence, providing a new tool to aid our understanding of this complex pathogen.
Abstract:
Monitoring the flue gas of thermal power plants is now often mandatory for environmental protection, to ensure that emission levels stay within safe limits. Optical gas sensing systems are increasingly employed for this purpose, with regression techniques used to relate gas optical absorption spectra to the concentrations of specific gas components of interest (NOx, SO2, etc.). Accurately predicting gas concentrations from absorption spectra remains a challenging problem due to the presence of nonlinearities in the relationships and the high-dimensional and correlated nature of the spectral data. This article proposes a generalized fuzzy linguistic model (GFLM) to address this challenge. The GFLM is made up of a series of "If-Then" fuzzy rules. The absorption spectra are the input variables in the rule antecedent, and the rule consequent is a general nonlinear polynomial function of the absorption spectra. Model parameters are estimated using least squares and gradient descent optimization algorithms. The performance of the GFLM is compared with that of other prediction models, such as partial least squares, support vector machines, multilayer perceptron neural networks, and radial basis function networks, on two real flue gas spectral datasets: one from a coal-fired power plant and one from a gas-fired power plant. The experimental results show that the generalized fuzzy linguistic model has good predictive ability and is competitive with the alternative approaches, while having the added advantage of providing an interpretable model.
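A toy sketch of how a rule-based fuzzy model of this general shape produces a prediction (a generic Takagi-Sugeno-style weighted average under Gaussian memberships, not the paper's actual GFLM formulation; all dimensions and values are hypothetical):

```python
# Each rule: IF x is near a prototype spectrum THEN y = polynomial(x).
# The prediction is the membership-weighted average of the rule consequents.
import numpy as np

def predict(x, centers, widths, coefs):
    """x: (d,) spectrum; centers/widths: (r, d); coefs: (r, d + 1)."""
    # Gaussian rule activations (one firing strength per rule).
    w = np.exp(-np.sum(((x - centers) / widths) ** 2, axis=1))
    # Linear consequents, a first-order special case of the polynomial consequent.
    y_rules = coefs[:, 0] + coefs[:, 1:] @ x
    return float(np.sum(w * y_rules) / np.sum(w))

# Hypothetical 3-rule model over a 4-dimensional absorbance vector.
rng = np.random.default_rng(1)
centers, widths = rng.random((3, 4)), np.full((3, 4), 0.5)
coefs = rng.random((3, 5))
print(predict(rng.random(4), centers, widths, coefs))
```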