879 results for model testing
Abstract:
In this paper, we propose a speech recognition engine using a hybrid of the Hidden Markov Model (HMM) and the Gaussian Mixture Model (GMM). Both models were trained independently, and their respective likelihood values are considered jointly as input to a decision logic that provides the net likelihood as output. The hybrid model has been compared with the HMM model alone. Training and testing were carried out on a database of 20 Hindi words spoken by 80 different speakers. The recognition rate achieved by the standard HMM is 83.5%, which increases to 85% with the hybrid HMM-GMM approach.
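The abstract does not specify the decision logic beyond producing a "net likelihood"; a minimal sketch, assuming a weighted sum of per-word log-likelihoods from the two independently trained models (all scores below are made-up toy values, not the paper's data):

```python
def fuse_log_likelihoods(hmm_ll, gmm_ll, weight=0.5):
    """Combine per-word log-likelihoods from two independently trained
    models into a single net score via a weighted sum (an assumption)."""
    return {w: weight * hmm_ll[w] + (1 - weight) * gmm_ll[w]
            for w in hmm_ll}

def recognize(hmm_ll, gmm_ll, weight=0.5):
    """Pick the word whose fused score is highest."""
    fused = fuse_log_likelihoods(hmm_ll, gmm_ll, weight)
    return max(fused, key=fused.get)

# Toy per-word log-likelihoods (higher is better) for three Hindi words.
hmm = {"ek": -120.0, "do": -118.0, "teen": -125.0}
gmm = {"ek": -95.0, "do": -99.0, "teen": -92.0}

print(recognize(hmm, gmm))  # → ek
```

The weight could itself be tuned on held-out data; with equal weighting, a word that is second-best under one model can still win if the other model strongly prefers it.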
Abstract:
Previous work has demonstrated that planning behaviours may be more adaptive than avoidance strategies in driving self-regulation, but ways of encouraging planning have not been investigated. The efficacy of an extended theory of planned behaviour (TPB) plus implementation-intention-based intervention to promote planning self-regulation in drivers across the lifespan was tested. An age-stratified group of participants (N=81, aged 18-83 years) was randomly assigned to an experimental or control condition. The intervention prompted specific goal setting with action planning and barrier identification. Goal setting was carried out using an agreed behavioural contract. Baseline and follow-up measures of TPB variables, self-reported driving self-regulation behaviours (avoidance and planning) and mobility goal achievements were collected using postal questionnaires. Like many previous efforts to change behaviour by changing its predictors using models such as the TPB, the intervention did not significantly change any of the model components. However, more than 90% of participants achieved their primary driving goal, and self-regulation planning as measured on a self-regulation inventory was marginally improved. The study demonstrates the role of pre-decisional, or motivational, components as contrasted with post-decisional goal enactment, and offers promise for the role of self-regulation planning and implementation intentions in assisting drivers in achieving their mobility goals and promoting safer driving across the lifespan, even in the context of unchanging beliefs such as perceived risk or driver anxiety.
Abstract:
2010 Mathematics Subject Classification: 62F10, 62F12.
Abstract:
Recent changes to the legislation on chemicals and cosmetics testing call for a change in the paradigm regarding the current 'whole animal' approach for identifying chemical hazards, including the assessment of potential neurotoxins. Accordingly, since 2004, we have worked on the development of the integrated co-culture of post-mitotic, human-derived neurons and astrocytes (NT2.N/A), for use as an in vitro functional central nervous system (CNS) model. We have used it successfully to investigate indicators of neurotoxicity. For this purpose, we used NT2.N/A cells to examine the effects of acute exposure to a range of test chemicals on the cellular release of brain-derived neurotrophic factor (BDNF). It was demonstrated that the release of this protective neurotrophin into the culture medium (above that of control levels) occurred consistently in response to sub-cytotoxic levels of known neurotoxic, but not non-neurotoxic, chemicals. These increases in BDNF release were quantifiable, statistically significant, and occurred at concentrations below those at which cell death was measurable, which potentially indicates specific neurotoxicity, as opposed to general cytotoxicity. The fact that the BDNF immunoassay is non-invasive, and that NT2.N/A cells retain their functionality for a period of months, may make this system useful for repeated-dose toxicity testing, which is of particular relevance to cosmetics testing without the use of laboratory animals. In addition, the production of NT2.N/A cells without the use of animal products, such as fetal bovine serum, is being explored, to produce a fully-humanised cellular model.
Abstract:
Constant load, progressive load and multipass nanoscratch (nanowear) tests were carried out on 500 and 1500 nm TiN coatings on M42 steel, chosen as model systems. The influences of film thickness, coating roughness, and scratch direction relative to the grinding grooves on the critical load in the progressive load test and on the number of cycles to failure in the wear test have been determined. Progress towards the development of a suitable methodology for determining the scratch hardness from nanoscratch tests is discussed. © 2011 W. S. Maney & Son Ltd.
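The scratch hardness mentioned above is commonly defined (for example, in ASTM G171) as HS = 8P/(πw²), where P is the normal load and w the residual scratch width; a small sketch of that calculation with hypothetical numbers, not values from this study:

```python
import math

def scratch_hardness(load_n, width_m):
    """Scratch hardness number HS = 8P / (pi * w**2), with P the
    normal load (N) and w the residual scratch width (m)."""
    return 8.0 * load_n / (math.pi * width_m ** 2)

# Hypothetical nanoscratch result: 50 mN load, 2 um residual groove width.
hs_gpa = scratch_hardness(50e-3, 2e-6) / 1e9
print(round(hs_gpa, 1))  # → 31.8 (GPa)
```

The quadratic dependence on width is why accurate measurement of the residual groove, rather than the in-situ penetration, dominates the uncertainty in nanoscratch hardness.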
Abstract:
There is a paucity of literature regarding the construction and operation of corporate identity at the stakeholder group level. This article examines corporate identity from the perspective of an individual stakeholder group, namely, front-line employees. This stakeholder group is central to the development of an organization’s corporate identity: it spans an organization’s boundaries, frequently interacts with both internal and external stakeholders, and influences a firm’s financial performance by building customer loyalty and satisfaction. The article reviews the corporate identity, branding, services and social identity literatures to address how corporate identity manifests within the front-line employee stakeholder group, identifying what components comprise front-line employee corporate identity and assessing what contribution front-line employees make to constructing a strong and enduring corporate identity for an organization. In reviewing the literature, the article develops propositions that, in conjunction with a conceptual model, constitute the generation of theory that is recommended for empirical testing.
Abstract:
The authors screened 34 large cattle herds for the presence of Mycoplasma bovis infection by examining slaughtered cattle for macroscopic lung lesions, by culturing M. bovis from lung lesions and at the same time by testing sera for the presence of antibodies against M. bovis. Among the 595 cattle examined, 33.9% had pneumonic lesions, mycoplasmas were isolated from 59.9% of pneumonic lung samples, and 10.9% of sera from those animals contained antibodies to M. bovis. In 25.2% of the cases M. bovis was isolated from lungs with no macroscopic lesions. The proportion of seropositive herds was 64.7%. The average seropositivity rate of individuals was 11.3% but in certain herds it exceeded 50%. A probability model was developed for examining the relationship among the occurrence of pneumonia, the isolation of M. bovis from the lungs and the presence of M. bovis specific antibodies in sera.
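The probability model itself is not given in the abstract, but the reported rates already relate the observations via the law of total probability and Bayes' rule; a rough sketch using the abstract's own figures:

```python
# Observed rates from the herd survey (proportions from the abstract).
p_pneumonia = 0.339          # cattle with pneumonic lesions
p_iso_given_pneu = 0.599     # M. bovis isolated from pneumonic lungs
p_iso_given_healthy = 0.252  # isolated from lungs without lesions

# Law of total probability: overall expected isolation rate.
p_isolation = (p_pneumonia * p_iso_given_pneu
               + (1 - p_pneumonia) * p_iso_given_healthy)

# Bayes' rule: chance a culture-positive animal also has lesions.
p_pneu_given_iso = p_pneumonia * p_iso_given_pneu / p_isolation

print(round(p_isolation, 3))       # → 0.37
print(round(p_pneu_given_iso, 3))  # → 0.549
```

So roughly 37% of all lungs would be expected to yield M. bovis, and just over half of culture-positive animals would also show macroscopic lesions, illustrating why culture and pathology disagree so often in these herds.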
Abstract:
The search-experience-credence framework from economics of information, the human-environment relations models from environmental psychology, and the consumer evaluation process from services marketing provide a conceptual basis for testing the model of "Pre-purchase Information Utilization in Service Physical Environments." The model addresses the effects of informational signs, as a dimension of the service physical environment, on consumers' perceptions (perceived veracity and perceived performance risk), emotions (pleasure) and behavior (willingness to buy). The informational signs provide attribute quality information (search and experience) through non-personal sources of information (simulated word-of-mouth and non-personal advocate sources). This dissertation examines: (1) the hypothesized relationships addressed in the model of "Pre-purchase Information Utilization in Service Physical Environments" among informational signs, perceived veracity, perceived performance risk, pleasure, and willingness to buy, and (2) the effects of attribute quality information and sources of information on consumers' perceived veracity and perceived performance risk. This research is the first in-depth study of the role and effects of information in service physical environments. Using a 2 x 2 between-subjects experimental procedure, undergraduate students were exposed to the informational signs in a simulated service physical environment.
The service physical environments were simulated through color photographic slides. The results of the study suggest that: (1) the relationship between informational signs and willingness to buy is mediated by perceived veracity, perceived performance risk and pleasure, (2) experience attribute information shows higher perceived veracity and lower perceived performance risk when compared to search attribute information, and (3) information provided through simulated word-of-mouth shows higher perceived veracity and lower perceived performance risk when compared to information provided through non-personal advocate sources.
Abstract:
This study identifies and describes HIV Voluntary Counseling and Testing (VCT) of middle-aged and older Latinas. The rate of new cases of HIV in people age 45 and older is rapidly increasing, with a 40.6% increase in the number of older Latinas infected with HIV between 1998 and 2002. Despite this increase, there is a paucity of research on this population. This research seeks to address the gap through a secondary data analysis of Latina women. The aim of this study is twofold: (1) to develop and empirically test a multivariate model of VCT utilization for middle-aged and older Latinas; (2) to test how the three individual components of the Andersen Behavioral Model impact VCT for middle-aged and older Latinas. The study is organized around the three major domains of the Andersen Behavioral Model of service use: (a) predisposing factors; (b) enabling characteristics; and (c) need. Logistic regression using structural equation modeling techniques was used to test multivariate relationships of variables on VCT for a sample of 135 middle-aged and older Latinas residing in Miami-Dade County, Florida. Over 60% of participants had been tested for HIV. Provider endorsement was found to be the strongest predictor of VCT (odds ratio [OR] = 6.38), followed by having a clinic as a regular source of healthcare (OR = 3.88). Significant negative associations with VCT included self-rated health status (OR = .592), age (OR = .927), Spanish proficiency (OR = .927), number of sexual partners (OR = .613) and consumption of alcohol during sexual activity (OR = .549). As this line of inquiry provides a critical glimpse into the VCT of older Latinas, recommendations for enhanced service provision and research are offered.
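As a back-of-envelope illustration of how odds ratios like those reported here translate into probabilities (the baseline rate and the single-predictor scenario below are illustrative assumptions, not results from the study, since the fitted model's intercept and covariate adjustments are not given in the abstract):

```python
def update_odds(base_prob, *odds_ratios):
    """Multiply baseline odds by each predictor's odds ratio and
    convert the result back to a probability."""
    odds = base_prob / (1.0 - base_prob)
    for oratio in odds_ratios:
        odds *= oratio
    return odds / (1.0 + odds)

# Reported odds ratios; a ~60% baseline testing rate mirrors the sample.
provider_endorsement_or = 6.38
clinic_source_or = 3.88

p = update_odds(0.60, provider_endorsement_or)
print(round(p, 3))  # → 0.905
```

Under these assumptions, a provider endorsement alone would push the implied testing probability from roughly 60% to above 90%, which conveys the practical weight of an OR of 6.38.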
Abstract:
Ensuring the correctness of software has been the major motivation in software research, constituting a Grand Challenge. Due to its impact on the final implementation, one critical aspect of software is its architectural design. By guaranteeing a correct architectural design, major and costly flaws can be caught early in the development cycle. Software architecture design has received a lot of attention in recent years, with several methods, techniques and tools developed. However, there is still more to be done, such as providing adequate formal analysis of software architectures. In this regard, a framework to ensure system dependability from design to implementation has been developed at FIU (Florida International University). This framework is based on SAM (Software Architecture Model), an ADL (Architecture Description Language) that allows hierarchical compositions of components and connectors, defines an architectural modeling language for the behavior of components and connectors, and provides a specification language for behavioral properties. The behavioral model of a SAM model is expressed in the form of Petri nets, and the properties in first-order linear temporal logic. This dissertation presents a formal verification and testing approach to guarantee the correctness of software architectures. The software architectures studied are expressed in SAM. For the formal verification approach, the technique applied was model checking, and the model checker of choice was Spin. As part of the approach, a SAM model is formally translated to a model in the input language of Spin and verified for its correctness with respect to temporal properties. In terms of testing, a testing approach for SAM architectures was defined, which includes the evaluation of test cases based on Petri net testing theory to be used in the testing process at the design level.
Additionally, the information at the design level is used to derive test cases for the implementation level. Finally, a modeling and analysis tool (SAM tool) was implemented to help support the design and analysis of SAM models. The results show the applicability of the approach to testing and verification of SAM models with the aid of the SAM tool.
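SAM behavioral models are Petri nets verified with Spin. Purely as an illustration of the underlying idea (exploring a net's marking graph and checking a safety property), and not the SAM/Spin toolchain itself, a minimal sketch with a made-up component/connector protocol:

```python
from collections import deque

def enabled(marking, pre):
    """A transition may fire if each input place holds enough tokens."""
    return all(marking.get(p, 0) >= n for p, n in pre.items())

def fire(marking, pre, post):
    """Consume input tokens, produce output tokens, drop empty places."""
    m = dict(marking)
    for p, n in pre.items():
        m[p] -= n
    for p, n in post.items():
        m[p] = m.get(p, 0) + n
    return {p: n for p, n in m.items() if n > 0}

def reachable(initial, transitions):
    """Breadth-first exploration of the (finite) marking graph."""
    seen = {frozenset(initial.items())}
    queue = deque([initial])
    while queue:
        m = queue.popleft()
        yield m
        for pre, post in transitions:
            if enabled(m, pre):
                m2 = fire(m, pre, post)
                key = frozenset(m2.items())
                if key not in seen:
                    seen.add(key)
                    queue.append(m2)

# Toy protocol: component sends a request, connector relays a reply.
net = [({"idle": 1}, {"waiting": 1, "req": 1}),
       ({"req": 1}, {"reply": 1}),
       ({"waiting": 1, "reply": 1}, {"idle": 1})]
markings = list(reachable({"idle": 1}, net))

# Safety property: the component is never both idle and waiting.
assert all(not (m.get("idle") and m.get("waiting")) for m in markings)
print(len(markings))  # → 3
```

Model checkers like Spin perform essentially this exhaustive state-space search, but on compiled transition systems and with temporal-logic properties rather than a single hand-written assertion.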
Abstract:
Stereotype threat (Steele & Aronson, 1995) refers to the risk of confirming a negative stereotype about one’s group in a particular performance domain. The theory assumes that performance in the stereotyped domain is most negatively affected when individuals are more highly identified with the domain in question. As federal law has increased the importance of standardized testing at the elementary level, it can be reasonably hypothesized that the standardized test performance of African American children will be depressed when they are aware of negative societal stereotypes about the academic competence of African Americans. This sequential mixed-methods study investigated whether the standardized testing experiences of African American children in an urban elementary school are related to their level of stereotype awareness. The quantitative phase utilized data from 198 African American children at an urban elementary school. Both ex-post facto and experimental designs were employed. Experimental conditions were diagnostic and non-diagnostic testing experiences. The qualitative phase utilized data from a series of six focus group interviews conducted with a purposefully selected group of 4 African American children. The interview data were supplemented with data from 30 hours of classroom observations. Quantitative findings indicated that the stereotype threat condition evoked by diagnostic testing depresses the reading test performance of stereotype-aware African American children (F[1, 194] = 2.21, p < .01). This was particularly true of students who are most highly domain-identified with reading (F[1, 91] = 19.18, p < .01). Moreover, findings indicated that only stereotype-aware African American children who were highly domain-identified were more likely to experience anxiety in the diagnostic condition (F[1, 91] = 5.97, p < .025). 
Qualitative findings revealed 4 themes regarding how African American children perceive and experience the factors related to stereotype threat: (1) a narrow perception of education as strictly test preparation, (2) feelings of stress and anxiety related to the state test, (3) concern with what “others” think (racial salience), and (4) stereotypes. A new conceptual model for stereotype threat is presented, and future directions including implications for practice and policy are discussed.
Abstract:
The adverse health effects of long-term exposure to lead are well established, with major uptake into the human body occurring mainly through oral ingestion by young children. Lead-based paint was frequently used in homes built before 1978, particularly in inner-city areas. Minority populations experience the effects of lead poisoning disproportionately. Lead-based paint abatement is costly. In the United States, residents of about 400,000 homes, occupied by 900,000 young children, lack the means to correct lead-based paint hazards. The magnitude of this problem demands research on affordable methods of hazard control. One method is encapsulation, defined as any covering or coating that acts as a permanent barrier between the lead-based paint surface and the environment. Two encapsulants were tested for reliability and effective life span through an accelerated lifetime experiment that applied stresses exceeding those encountered under normal use conditions. The resulting time-to-failure data were used to extrapolate the failure time under conditions of normal use. Statistical analysis and models of the test data allow forecasting of long-term reliability relative to the 20-year encapsulation requirement. Typical housing material specimens simulating walls and doors coated with lead-based paint were overstressed before encapsulation. A second, un-aged set was also tested. Specimens were monitored after the stress test with a surface chemical testing pad to identify the presence of lead breaking through the encapsulant. Graphical analysis proposed by Shapiro and Meeker and the general log-linear model developed by Cox were used to obtain results. Findings for the 80% reliability time to failure varied, with close to 21 years of life under normal use conditions for encapsulant A. The application of product A on the aged gypsum and aged wood substrates yielded slightly lower times. Encapsulant B had an 80% reliable life of 19.78 years.
This study reveals that encapsulation technologies can offer safe and effective control of lead-based paint hazards and may be less expensive than other options. The U.S. Department of Health and Human Services and the CDC are committed to eliminating childhood lead poisoning by 2010. This ambitious target is feasible, provided there is an efficient application of innovative technology, a goal to which this study aims to contribute.
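The study's analysis used Shapiro and Meeker's graphical methods and Cox's log-linear model; as a simpler hedged sketch of the final step only, here is how an 80%-reliable life would be computed from an assumed Weibull fit (the parameters below are hypothetical, not the study's fitted values):

```python
import math

def weibull_reliable_life(eta, beta, reliability=0.80):
    """Time at which a Weibull(eta, beta) population still has the given
    fraction surviving, solved from R(t) = exp(-(t / eta) ** beta)."""
    return eta * (-math.log(reliability)) ** (1.0 / beta)

# Hypothetical fitted parameters for an encapsulant under normal use.
t80 = weibull_reliable_life(eta=60.0, beta=2.5)
print(round(t80, 2))  # → 32.93 (years)
```

Comparing such a computed 80%-reliable life against the 20-year encapsulation requirement is exactly the kind of pass/fail judgment the abstract reports (e.g. 19.78 years for encapsulant B).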
Abstract:
Non-Destructive Testing (NDT) of deep foundations has become an integral part of the industry's standard manufacturing processes. It is not unusual for the evaluation of the integrity of the concrete to include the measurement of ultrasonic wave speeds. Numerous methods have been proposed that use the propagation speed of ultrasonic waves to check the integrity of concrete for drilled shaft foundations. All such methods evaluate the integrity of the concrete inside the cage and between the access tubes. The integrity of the concrete outside the cage remains to be considered to determine the location of the border between the concrete and the soil, and hence the diameter of the drilled shaft. It is also economical to devise a methodology that obtains the diameter of the drilled shaft using the Cross-Hole Sonic Logging (CSL) system, since the CSL tests already performed to check the integrity of the inside concrete can then also determine the drilled shaft diameter without having to set up another NDT device. The proposed new method is based on the installation of galvanized tubes outside the shaft across from each inside tube, and on performing the CSL test between the inside and outside tubes. From the experimental work performed, a model is developed to evaluate the relationship between the thickness of concrete and the ultrasonic wave properties using signal processing. The experimental results show that there is a direct correlation between the concrete thickness outside the cage and the maximum amplitude of the received signal obtained from frequency-domain data. This study demonstrates how this new method of measuring the diameter of drilled shafts during construction using an NDT method overcomes the limitations of currently used methods. In another part of the study, a new method is proposed to visualize and quantify the extent and location of defects.
It is based on a color change in the frequency amplitude of the signal recorded by the receiver probe at the location of defects, and it is called Frequency Tomography Analysis (FTA). Time-domain data are transformed into frequency-domain data for the signals propagated between tubes using the Fast Fourier Transform (FFT). Then the distribution of the frequency amplitudes is evaluated. This method is employed after CSL has determined the high probability of an anomaly in a given area, and is applied to improve location accuracy and to further characterize the feature. The technique has very good resolution and clarifies the exact depth location of any void or defect through the length of the drilled shaft for voids inside the cage. The last part of the study evaluates the effect of voids inside and outside the reinforcement cage and of corrosion in the longitudinal bars on the strength and axial load capacity of drilled shafts. The objective is to quantify the extent of loss in axial strength and stiffness of drilled shafts due to the presence of different types of symmetric voids and corrosion throughout their lengths.
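The FTA method works on the frequency-domain amplitudes of the received signals; a minimal sketch of that transform step on a synthetic tone (the sampling rate, tone frequency, and amplitude are illustrative stand-ins, not CSL field data):

```python
import numpy as np

def peak_frequency_amplitude(signal, fs):
    """Return (dominant frequency, amplitude) from the single-sided
    FFT of a received trace sampled at fs Hz."""
    n = len(signal)
    amps = np.abs(np.fft.rfft(signal)) / n * 2.0  # single-sided scaling
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    k = np.argmax(amps[1:]) + 1                   # skip the DC bin
    return freqs[k], amps[k]

# Synthetic 40 kHz tone as a stand-in for an ultrasonic CSL pulse.
fs = 1_000_000
t = np.arange(1000) / fs
sig = 0.8 * np.sin(2 * np.pi * 40_000 * t)
f, a = peak_frequency_amplitude(sig, fs)
print(f, round(a, 2))  # → 40000.0 0.8
```

In the FTA picture, this peak amplitude is computed per tube pair and depth, and a drop in amplitude at a given depth maps to the color change that flags a defect.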
Abstract:
Low-rise buildings are often subjected to high wind loads during hurricanes that lead to severe damage and cause water intrusion. It is therefore important to estimate accurate wind pressures for design purposes to reduce losses. Wind loads on low-rise buildings can differ significantly depending upon the laboratory in which they were measured. The differences are due in large part to inadequate simulations of the low-frequency content of atmospheric velocity fluctuations in the laboratory and to the small scale of the models used for the measurements. A new partial turbulence simulation methodology was developed for simulating the effect of low-frequency flow fluctuations on low-rise buildings more effectively from the point of view of testing accuracy and repeatability than is currently the case. The methodology was validated by comparing aerodynamic pressure data for building models obtained in the open-jet 12-Fan Wall of Wind (WOW) facility against their counterparts in a boundary-layer wind tunnel. Field measurements of pressures on the Texas Tech University building and the Silsoe building were also used for validation purposes. The tests in partial simulation are freed of integral length scale constraints, meaning that model length scales in such testing are only limited by blockage considerations. Thus the partial simulation methodology can be used to produce aerodynamic data for low-rise buildings by using large-scale models in wind tunnels and WOW-like facilities. This is a major advantage, because large-scale models allow for accurate modeling of architectural details, testing at higher Reynolds number, using greater spatial resolution of the pressure taps in high pressure zones, and assessing the performance of aerodynamic devices to reduce wind effects.
The technique eliminates a major cause of discrepancies among measurements conducted in different laboratories and can help to standardize flow simulations for testing residential homes as well as significantly improving testing accuracy and repeatability. Partial turbulence simulation was used in the WOW to determine the performance of discontinuous perforated parapets in mitigating roof pressures. The comparisons of pressures with and without parapets showed significant reductions in pressure coefficients in the zones with high suctions. This demonstrated the potential of such aerodynamic add-on devices to reduce uplift forces.
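Roof pressures in such tests are conventionally reported as nondimensional pressure coefficients; a small sketch of that standard normalization (the tap reading, air density, and reference speed below are illustrative, not measurements from this study):

```python
def pressure_coefficient(p, p_ref, rho, u_ref):
    """Nondimensional pressure: Cp = (p - p_ref) / (0.5 * rho * U ** 2)."""
    return (p - p_ref) / (0.5 * rho * u_ref ** 2)

# Illustrative roof-corner tap: -735 Pa gauge at a 30 m/s reference wind.
cp = pressure_coefficient(p=-735.0, p_ref=0.0, rho=1.225, u_ref=30.0)
print(round(cp, 2))  # → -1.33 (strong suction)
```

Because Cp is dimensionless, reductions like those reported for the perforated parapets can be compared directly across model scales and wind speeds.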
Abstract:
Ensemble stream modeling and data cleaning are sensor information processing systems with different training and testing methods by which their goals are cross-validated. This research examines a mechanism that seeks to extract novel patterns by generating ensembles from data. The main goal of label-less stream processing is to process the sensed events so as to eliminate uncorrelated noise and choose the most likely model without overfitting, thus obtaining higher model confidence. Higher-quality streams can be realized by combining many short streams into an ensemble that has the desired quality. The framework for the investigation is an existing data mining tool. First, to accommodate feature extraction for an event such as a bush or natural forest fire, we assume the burnt area (BA*), the sensed ground truth obtained from logs, as our target variable. Even though this is an obvious model choice, the results are disappointing, for two reasons: first, the histogram of fire activity is highly skewed; second, the measured sensor parameters are highly correlated. Since using non-descriptive features does not yield good results, we resort to temporal features. By doing so we carefully eliminate the averaging effects; the resulting histogram is more satisfactory, and conceptual knowledge is learned from the sensor streams. Second is the process of feature induction by cross-validating attributes with single or multi-target variables to minimize training error. We use the F-measure score, which combines precision and recall, to determine the false alarm rate of fire events. The multi-target data-cleaning trees use the information purity of the target leaf nodes to learn higher-order features. A sensitive variance measure such as the f-test is performed during each node's split to select the best attribute. The ensemble stream model approach proved to improve when using complicated features with a simpler tree classifier.
The ensemble framework for data cleaning and the enhancements to quantify quality of fitness (30% spatial, 10% temporal, and 90% mobility reduction) of sensor streams led to the formation of streams for sensor-enabled applications. This further motivates the novelty of stream quality labeling and its importance in handling the vast number of real-time mobile streams generated today.
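The F-measure used above combines precision and recall; a minimal sketch with hypothetical detection counts (not figures from the study):

```python
def f_measure(tp, fp, fn, beta=1.0):
    """F-score combining precision and recall; beta > 1 weights recall
    (missed fire events) more heavily than false alarms."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# Hypothetical fire-event detection counts from one ensemble stream.
print(round(f_measure(tp=42, fp=8, fn=12), 3))  # → 0.808
```

Choosing beta is a policy decision: for fire detection one might set beta > 1 so that a missed event costs more than a false alarm.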