950 results for Model accuracy


Relevance: 60.00%

Abstract:

The objective of this study was to investigate the potential application of mid-infrared spectroscopy for determination of selected sensory attributes in a range of experimentally manufactured processed cheese samples. This study also evaluates mid-infrared spectroscopy against other recently proposed techniques for predicting sensory texture attributes. Processed cheeses (n = 32) of varying compositions were manufactured on a pilot scale. After 2 and 4 wk of storage at 4 degrees C, mid-infrared spectra (640 to 4,000 cm(-1)) were recorded and samples were scored on a scale of 0 to 100 for 9 attributes using descriptive sensory analysis. Models were developed by partial least squares regression using raw and pretreated spectra. The mouth-coating and mass-forming models were improved by using a reduced spectral range (930 to 1,767 cm(-1)). The remaining attributes were most successfully modeled using a combined range (930 to 1,767 cm(-1) and 2,839 to 4,000 cm(-1)). The root mean square errors of cross-validation for the models were 7.4 (firmness; range 65.3), 4.6 (rubbery; range 41.7), 7.1 (creamy; range 60.9), 5.1 (chewy; range 43.3), 5.2 (mouth-coating; range 37.4), 5.3 (fragmentable; range 51.0), 7.4 (melting; range 69.3), and 3.1 (mass-forming; range 23.6). These models had good practical utility. Model accuracy ranged from approximate quantitative predictions to excellent predictions (range error ratio = 9.6). In general, the models compared favorably with previously reported instrumental texture models and near-infrared models, although the creamy, chewy, and melting models were slightly weaker than the previously reported near-infrared models. We concluded that mid-infrared spectroscopy could be successfully used for the nondestructive and objective assessment of processed cheese sensory quality.
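The range error ratio (RER) cited above is simply each attribute's sensory score range divided by its root mean square error of cross-validation. A quick sketch using only the values reported in this abstract reproduces the quoted best case:

```python
# Range error ratio (RER) = attribute score range / RMSECV.
# Values are taken from the abstract; higher RER indicates better
# practical utility of a spectroscopic calibration model.
models = {  # attribute: (RMSECV, score range)
    "firmness": (7.4, 65.3), "rubbery": (4.6, 41.7), "creamy": (7.1, 60.9),
    "chewy": (5.1, 43.3), "mouth-coating": (5.2, 37.4),
    "fragmentable": (5.3, 51.0), "melting": (7.4, 69.3),
    "mass-forming": (3.1, 23.6),
}
rer = {name: rng / rmsecv for name, (rmsecv, rng) in models.items()}
best = max(rer, key=rer.get)
print(best, round(rer[best], 1))  # fragmentable 9.6
```

The maximum RER of 9.6 (fragmentable) matches the "excellent predictions" figure quoted in the abstract.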

Relevance: 60.00%

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance: 60.00%

Abstract:

The Paraguay River is the main tributary of the Paraná River and extends 1,693 km in Brazilian territory. Navigability conditions are very important for the regional economy, because most of central-west Brazil's agricultural and mineral production is transported along the Paraguay waterway. Increased sedimentation along the channel requires continuous dredging for waterway maintenance. Systematic bathymetric surveys using echo-sounding devices are periodically carried out to check depth conditions along the channel. In this paper, digital image processing and geostatistical analysis methods were used to analyze the applicability of the ASTER sensor for estimating channel depths in a segment of the upper Paraguay River. The results were compared with field data in order to choose the band with the best adjustment and to evaluate the standard deviation. Among the VNIR bands, the best fit was presented by the red wavelength (band 2; 0.63 to 0.69 μm), which gave a good representation of channel depths shallower than 1.7 m. Applying geostatistical methods, the model accuracy was improved from 43 cm to 36 cm and undesired components were attenuated. It was concluded that the digital number of band 2, converted to bathymetric information, allows a good estimation of river depths and channel morphology.
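As a rough illustration of the depth-from-reflectance step (not the study's actual calibration), a linear fit of hypothetical band-2 digital numbers against field-measured depths looks like this; the geostatistical refinement that reduced the error from 43 cm to 36 cm is not reproduced:

```python
import numpy as np

# Hypothetical calibration points: water-leaving reflectance (here a
# band-2 digital number, DN) falls as depth increases, so a simple
# inverse linear relation is fitted. All numbers are invented.
dn = np.array([120.0, 105.0, 92.0, 80.0, 71.0, 60.0])   # band-2 DN
depth = np.array([0.4, 0.7, 1.0, 1.3, 1.5, 1.7])        # field depth (m)

slope, intercept = np.polyfit(dn, depth, 1)  # depth ≈ slope*DN + intercept
predicted = slope * dn + intercept
rmse = float(np.sqrt(np.mean((predicted - depth) ** 2)))
print(round(rmse, 3))  # 0.019
```

With real imagery the residual error is far larger (tens of centimeters, as reported), which is why the geostatistical filtering step matters.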

Relevance: 60.00%

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance: 60.00%

Abstract:

In this study, the effect of time derivatives of flow rate and rotational speed was investigated on the mathematical modeling of a rotary blood pump (RBP). The basic model estimates the pressure head of the pump as a dependent variable using measured flow and speed as predictive variables. Performance of the model was evaluated by adding time derivative terms for flow and speed. First, to create a realistic working condition, the Levitronix CentriMag RBP was implanted in a sheep. All parameters from the model were physically measured and digitally acquired over a wide range of conditions, including pulsatile speed. Second, a statistical analysis of the different variables (flow, speed, and their time derivatives) based on multiple regression analysis was performed to determine the significant variables for pressure head estimation. Finally, different mathematical models were used to show the effect of time derivative terms on the performance of the models. In order to evaluate how well the estimated pressure head using different models fits the measured pressure head, root mean square error and correlation coefficient were used. The results indicate that inclusion of time derivatives of flow and speed can improve model accuracy, but only minimally.
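The comparison can be sketched on synthetic data (not the sheep measurements): fit a static model of pressure head from speed and flow, then a dynamic one that adds dQ/dt and dω/dt as extra regressors. The model form, coefficients, and signal shapes below are invented for illustration:

```python
import numpy as np

# Static model: H = a*w^2 + b*Q + c.  Dynamic model adds the time
# derivatives dQ/dt and dw/dt, as investigated in the study.
rng = np.random.default_rng(0)
t = np.linspace(0, 10, 500)
w = 3000 + 300 * np.sin(2 * np.pi * t)       # pulsatile pump speed (rpm)
q = 4 + np.sin(2 * np.pi * t + 0.5)          # flow rate (L/min)
dq, dw = np.gradient(q, t), np.gradient(w, t)
h = 1e-5 * w**2 - 3.0 * q - 0.02 * dq + rng.normal(0, 0.5, t.size)

X_static = np.column_stack([w**2, q, np.ones_like(t)])
X_dynamic = np.column_stack([w**2, q, dq, dw, np.ones_like(t)])
rmse = lambda X: float(np.sqrt(np.mean(
    (h - X @ np.linalg.lstsq(X, h, rcond=None)[0]) ** 2)))
# Extra regressors can only lower in-sample least-squares error;
# whether the improvement is worthwhile is the study's question.
print(rmse(X_static) >= rmse(X_dynamic))  # True
```

As in the study's conclusion, the improvement from the derivative terms is small relative to the measurement noise.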

Relevance: 60.00%

Abstract:

The municipality of San Juan La Laguna, Guatemala is home to approximately 5,200 people and located on the western side of the Lake Atitlán caldera. Steep slopes surround all but the eastern side of San Juan. The Lake Atitlán watershed is susceptible to many natural hazards, but the most predictable are the landslides that can occur annually with each rainy season, especially during high-intensity events. Hurricane Stan hit Guatemala in October 2005; the resulting flooding and landslides devastated the Atitlán region. Locations of landslide and non-landslide points were obtained from field observations and orthophotos taken following Hurricane Stan. This study used data from multiple attributes, at every landslide and non-landslide point, and applied different multivariate analyses to optimize a model for landslide prediction during high-intensity precipitation events like Hurricane Stan. The attributes considered in this study are: geology, geomorphology, distance to faults and streams, land use, slope, aspect, curvature, plan curvature, profile curvature and topographic wetness index. The attributes were pre-evaluated for their ability to predict landslides using four different attribute evaluators, all available in the open source data mining software Weka: filtered subset, information gain, gain ratio and chi-squared. Three multivariate algorithms (decision tree J48, logistic regression and BayesNet) were optimized for landslide prediction using different attributes. The following statistical parameters were used to evaluate model accuracy: precision, recall, F-measure and area under the receiver operating characteristic (ROC) curve. The algorithm BayesNet yielded the most accurate model and was used to build a probability map of landslide initiation points. The probability map developed in this study was also compared to the results of a bivariate landslide susceptibility analysis conducted for the watershed, encompassing Lake Atitlán and San Juan.
Landslides from Tropical Storm Agatha 2010 were used to independently validate this study’s multivariate model and the bivariate model. The ultimate aim of this study is to share the methodology and results with municipal contacts from the author's time as a U.S. Peace Corps volunteer, to facilitate more effective future landslide hazard planning and mitigation.
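Of the four attribute evaluators named above, information gain is the simplest to show by hand. A toy calculation (invented counts, not the Atitlán data) for one binarized attribute against the landslide/non-landslide label:

```python
import math

# Information gain of a candidate attribute (e.g. a binarized slope
# class) for a binary landslide label: entropy of the label minus the
# weighted entropy after splitting on the attribute. Toy counts only.
def entropy(pos, neg):
    total = pos + neg
    h = 0.0
    for c in (pos, neg):
        if c:
            p = c / total
            h -= p * math.log2(p)
    return h

# Steep sites: 8 landslide / 2 stable.  Gentle sites: 1 landslide / 9 stable.
parent = entropy(9, 11)
child = (10 / 20) * entropy(8, 2) + (10 / 20) * entropy(1, 9)
gain = parent - child
print(round(gain, 3))  # 0.397
```

Weka's InfoGainAttributeEval ranks attributes by exactly this quantity; gain ratio and chi-squared are alternative rankings of the same contingency counts.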

Relevance: 60.00%

Abstract:

Maximizing data quality may be especially difficult in trauma-related clinical research. Strategies are needed to improve data quality and assess the impact of data quality on clinical predictive models. This study had two objectives. The first was to compare missing data between two multi-center trauma transfusion studies: a retrospective study (RS) using medical chart data with minimal data quality review and the PRospective Observational Multi-center Major Trauma Transfusion (PROMMTT) study with standardized quality assurance. The second objective was to assess the impact of missing data on clinical prediction algorithms by evaluating blood transfusion prediction models using PROMMTT data. RS (2005-06) and PROMMTT (2009-10) investigated trauma patients receiving ≥ 1 unit of red blood cells (RBC) from ten Level I trauma centers. Missing data were compared for 33 variables collected in both studies using mixed effects logistic regression (including random intercepts for study site). Massive transfusion (MT) patients received ≥ 10 RBC units within 24h of admission. Correct classification percentages for three MT prediction models were evaluated using complete case analysis and multiple imputation based on the multivariate normal distribution. A sensitivity analysis for missing data was conducted to estimate the upper and lower bounds of correct classification using assumptions about missing data under best and worst case scenarios. Most variables (17/33=52%) had <1% missing data in RS and PROMMTT. Of the remaining variables, 50% demonstrated less missingness in PROMMTT, 25% had less missingness in RS, and 25% were similar between studies. Missing percentages for MT prediction variables in PROMMTT ranged from 2.2% (heart rate) to 45% (respiratory rate). For variables missing >1%, study site was associated with missingness (all p≤0.021). Survival time predicted missingness for 50% of RS and 60% of PROMMTT variables. 
Complete-case proportions for the MT models ranged from 41% to 88%. Complete case analysis and multiple imputation demonstrated similar correct classification results. Sensitivity analysis upper-lower bound ranges for the three MT models were 59-63%, 36-46%, and 46-58%. Prospective collection of ten-fold more variables with data quality assurance reduced overall missing data. Study site and patient survival were associated with missingness, suggesting that data were not missing completely at random and that complete case analysis may lead to biased results. Evaluating clinical prediction model accuracy may be misleading in the presence of missing data, especially with many predictor variables. The proposed sensitivity analysis, which estimates correct classification under upper (best case scenario) and lower (worst case scenario) bounds, may be more informative than multiple imputation, which provided results similar to complete case analysis.
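The bounding logic of the sensitivity analysis can be sketched in a few lines (toy numbers, not PROMMTT data): incomplete cases are assumed all correctly classified for the upper bound and all incorrectly classified for the lower bound:

```python
# Best/worst-case bounds on correct classification under missing data.
# Complete cases are scored as observed; cases with missing predictors
# are counted as all correct (best case) or all incorrect (worst case).
n_total = 100            # all patients
n_complete = 60          # complete-case count (here 60%)
n_correct_complete = 48  # correctly classified among complete cases

n_missing = n_total - n_complete
upper = (n_correct_complete + n_missing) / n_total  # best case
lower = n_correct_complete / n_total                # worst case
print(lower, upper)  # 0.48 0.88
```

A wide lower-upper gap (as in the paper's 36-46% and 46-58% models) signals that missingness, not the classifier, dominates the uncertainty.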

Relevance: 60.00%

Abstract:

In recent years, the increasing sophistication of embedded multimedia systems and wireless communication technologies has promoted the widespread use of video streaming applications. It was reported in 2013 that youngsters aged between 13 and 24 spend around 16.7 hours a week watching online video through social media, business websites, and video streaming sites. Video applications have already been blended into people's daily lives. Traditionally, video streaming research has focused on performance improvement, namely throughput increase and response time reduction. However, most mobile devices are battery-powered, and battery technology advances at a much slower pace than either multimedia or hardware developments. Since battery developments cannot satisfy the expanding power demand of mobile devices, research on video application technology has increasingly turned toward energy-efficient designs. How to efficiently use the limited battery energy budget has become a major research challenge. In addition, next-generation video standards are moving toward diversification and personalization. It is therefore desirable to have mechanisms that implement energy optimizations with greater flexibility and scalability. In this context, the main goal of this dissertation is to find an energy management and optimization mechanism that reduces the energy consumption of video decoders based on the idea of functional-oriented reconfiguration. System battery life is prolonged as the result of a trade-off between energy consumption and video quality. Functional-oriented reconfiguration takes advantage of the similarities among standards to build video decoders by reconnecting existing functional units. If a feedback channel from the decoder to the encoder is available, the former can signal to the latter changes in either the encoding parameters or the encoding algorithms for energy-saving adaptation.
The proposed energy optimization and management mechanism operates at the decoder end. It consists of an energy-aware manager, implemented as an additional block of the reconfiguration engine; an energy estimator, integrated into the decoder; and, if available, a feedback channel connected to the encoder end. The energy-aware manager checks the battery level, selects the new decoder description, and signals the reconfiguration engine to build a new decoder. It is worth noting that the analysis of energy consumption is fundamental to the success of the energy management and optimization mechanism. In this thesis, an energy estimation method driven by platform event monitoring is proposed. In addition, an event filter is suggested to automate the selection of the events that most affect energy consumption. Finally, a detailed study on the influence of the training data on model accuracy is presented. The modeling methodology of the energy estimator has been evaluated on different underlying platforms, single-core and multi-core, with different workload characteristics. All the results show good accuracy and low on-line computation overhead. The modifications required in the reconfiguration engine to implement the energy-aware manager have been assessed under different scenarios. The results indicate that the battery lifetime of the system can be extended in two different use cases.
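A minimal sketch of the event-driven energy estimator, assuming (the thesis does not specify the model form here) a linear model over hardware event counts fitted by least squares; the counter set, rates, and weights below are all invented:

```python
import numpy as np

# Energy per decoded frame modelled as a linear combination of platform
# event counts (e.g. instructions retired, cache misses, memory
# accesses). Synthetic counters and weights; the thesis additionally
# filters events to keep only those most correlated with energy.
rng = np.random.default_rng(3)
frames = 300
events = rng.poisson([5e6, 2e5, 4e4], size=(frames, 3)).astype(float)
energy = events @ np.array([2e-9, 5e-8, 3e-7])          # "true" joules
energy = energy + rng.normal(0, 0.001, frames)          # measurement noise

weights, *_ = np.linalg.lstsq(events, energy, rcond=None)  # train estimator
estimate = events @ weights
rel_err = float(np.mean(np.abs(estimate - energy) / energy))
print(rel_err < 0.05)  # True
```

At run time the fitted weights make the estimator a single dot product per frame, which is what keeps the on-line overhead low.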

Relevance: 60.00%

Abstract:

Predicting failures in a distributed system from previous events through logistic regression is a standard approach in the literature. This technique is not reliable, though, in two situations: in the prediction of rare events, which do not appear often enough for the algorithm to capture them, and in environments with too many variables, where logistic regression tends to overfit; meanwhile, manually selecting a subset of variables to create the model is error-prone. In this paper, we solve an industrial research case that presented this situation with a combination of elastic net logistic regression, a method that allows us to automatically select useful variables, a process of cross-validation on top of it, and the application of a rare events prediction technique to reduce computation time. This process provides two layers of cross-validation that automatically obtain the optimal model complexity and the optimal model parameter values, while ensuring that even rare events will be correctly predicted with a low number of training instances. We tested this method against real industrial data, obtaining a total of 60 out of 80 possible models with a 90% average model accuracy.

Relevance: 60.00%

Abstract:

A major challenge of modern teams lies in coordinating the efforts not just of individuals within a team, but also of teams whose efforts are ultimately entwined with those of other teams. Despite this fact, much of the research on work teams fails to consider the external dependencies that exist in organizational teams and instead focuses on internal or within-team processes. Multi-Team Systems Theory is used as a theoretical framework for understanding teams-of-teams organizational forms (Multi-Team Systems; MTSs), and leadership teams are proposed as one remedy that enables MTS members to dedicate needed resources to intra-team activities while ensuring effective synchronization of between-team activities. Two functions of leader teams were identified, strategy development and coordination facilitation, and a model was developed delineating the effects of the two leader roles on multi-team cognitions, processes, and performance. Three hundred eighty-four undergraduate psychology and business students participated in a laboratory simulation that modeled an MTS; each MTS comprised three two-member teams, each performing distinct but interdependent components of an F-22 battle simulation task. Two roles of leader teams supported in the literature were manipulated through training in a 2 (strategy training vs. control) x 2 (coordination training vs. control) design. Multivariate analysis of variance (MANOVA) and mediated regression analysis were used to test the study's hypotheses. Results indicate that both training manipulations produced differences in the effectiveness of the intended form of leader behavior. The enhanced leader strategy training resulted in more accurate (but not more similar) MTS mental models, better inter-team coordination, and higher levels of multi-team (but not component team) performance.
Moreover, mental model accuracy fully mediated the relationship between leader strategy and inter-team coordination, and inter-team coordination fully mediated the effect of leader strategy on multi-team performance. Leader coordination training led to better inter-team coordination, but not to higher levels of either team or multi-team performance. Mediated Input-Process-Output (I-P-O) relationships were not supported with leader coordination; rather, leader coordination facilitation and inter-team coordination uniquely contributed to component-team and multi-team performance. The implications of these findings and future research directions are also discussed.

Relevance: 60.00%

Abstract:

Annual mean salinity, light availability, and sediment depth to bedrock structured the submerged aquatic vegetation (SAV) communities in subtropical mangrove-lined estuaries. Three distinct SAV communities (i.e., Chara group, Halodule group, and Low SAV coverage group) were identified along the Everglades–Florida Bay ecotone and related to water quality using a discriminant function model that predicted the type of plant community at a given site from salinity, light availability, and sediment depth to bedrock. Mean salinity alone was able to correctly classify 78% of the sites and reliably separated the Chara group from the Halodule group. The addition of light availability and sediment depth to bedrock increased model accuracy to 90% and further distinguished the Chara group from the Halodule group. Light availability was uniquely valuable in separating the Chara group from the Low SAV coverage group. Regression analyses identified significant relationships between phosphorus concentration, phytoplankton abundance, and light availability, and suggest that a decline in water transparency, associated with increasing salinity, may also have contributed to the historical decline of Chara communities in the region. This investigation applies relationships between environmental variables and SAV distribution and provides a case study in the application of these general principles to ecosystem management.
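The incremental-accuracy comparison can be sketched with a nearest-class-centroid stand-in for the discriminant function (synthetic sites; the group means, spreads, and units below are invented, chosen so the extra predictors resolve the overlap along salinity, as reported):

```python
import numpy as np

# Classify sites into two SAV groups from salinity alone, then from
# salinity + light + sediment depth, using a nearest-centroid rule.
rng = np.random.default_rng(2)
n = 200
label = rng.integers(0, 2, n)                 # 0 = Chara, 1 = Halodule
salinity = rng.normal(10 + 8 * label, 5)      # groups overlap on salinity
light = rng.normal(70 - 30 * label, 8)        # % light availability
sediment = rng.normal(40 + 25 * label, 8)     # sediment depth (cm)

def centroid_acc(X):
    mu0, mu1 = X[label == 0].mean(0), X[label == 1].mean(0)
    pred = np.linalg.norm(X - mu1, axis=1) < np.linalg.norm(X - mu0, axis=1)
    return float(np.mean(pred == (label == 1)))

acc1 = centroid_acc(np.column_stack([salinity]))
acc3 = centroid_acc(np.column_stack([salinity, light, sediment]))
print(acc3 >= acc1)  # True
```

The pattern mirrors the study: one variable gets most of the way (78% there), and the additional environmental predictors close the remaining gap (to 90%).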

Relevance: 60.00%

Abstract:

Recently, researchers have begun to investigate the benefits of cross-training teams. It has been hypothesized that cross-training should help improve team processes and team performance (Cannon-Bowers, Salas, Blickensderfer, & Bowers, 1998; Travillian, Volpe, Cannon-Bowers, & Salas, 1993). The current study extends previous research by examining different methods of cross-training (positional clarification and positional modeling) and the impact they have on team process and performance in both more complex and less complex environments. One hundred and thirty-five psychology undergraduates were placed in 45 three-person teams. Participants were randomly assigned to roles within teams. Teams were asked to "fly" a series of missions on a PC-based helicopter flight simulation. Results suggest that cross-training improves team mental model accuracy and similarity. Accuracy of team mental models was found to be a predictor of coordination quality, but similarity of team mental models was not. Neither similarity nor accuracy of team mental models was found to be a predictor of backup behavior (quality and quantity). As expected, both team coordination (quality) and backup behaviors (quantity and quality) were significant predictors of overall team performance. Contrary to expectations, there was no interaction between cross-training and environmental complexity. Results from this study further cross-training research by establishing positional clarification and positional modeling as training strategies for improving team performance.

Relevance: 60.00%

Abstract:

Marine mammals exploit the efficiency of sound propagation in the marine environment for essential activities like communication and navigation. For this reason, passive acoustics has particularly high potential for marine mammal studies, especially those aimed at population management and conservation. Despite the rapid realization of this potential through a growing number of studies, much crucial information remains unknown or poorly understood. This research attempts to address two key knowledge gaps, using the well-studied bottlenose dolphin (Tursiops truncatus) as a model species, and underwater acoustic recordings collected on four fixed autonomous sensors deployed at multiple locations in Sarasota Bay, Florida, between September 2012 and August 2013. Underwater noise can hinder dolphin communication. The ability of these animals to overcome this obstacle was examined using recorded noise and dolphin whistles. I found that bottlenose dolphins are able to compensate for increased noise in their environment using a wide range of strategies employed singly or in various combinations, depending on the frequency content of the noise, the noise source, and the time of day. These strategies include modifying whistle frequency characteristics, increasing whistle duration, and increasing whistle redundancy. Recordings were also used to evaluate the performance of six recently developed passive acoustic abundance estimation methods, by comparing their results to the true abundance of animals, obtained via a census conducted within the same area and time period. The methods employed were broadly divided into two categories: those involving direct counts of animals, and those involving counts of cues (signature whistles). The animal-based methods were traditional capture-recapture, spatially explicit capture-recapture (SECR), and an approach that blends the "snapshot" method with mark-recapture distance sampling, referred to here as SMRDS.
The cue-based methods were conventional distance sampling (CDS), an acoustic modeling approach involving the use of the passive sonar equation, and SECR. In the latter approach, detection probability was modelled as a function of sound transmission loss, rather than the Euclidean distance typically used. Of these methods, while SMRDS produced the most accurate estimate, SECR demonstrated the greatest potential for broad applicability to other species and locations, with minimal to no auxiliary data, such as distance from sound source to detector(s), which is often difficult to obtain. This was especially true when this method was compared to traditional capture-recapture results, which greatly underestimated abundance, despite attempts to account for major unmodelled heterogeneity. Furthermore, the incorporation of non-Euclidean distance significantly improved model accuracy. The acoustic modelling approach performed similarly to CDS, but both methods also strongly underestimated abundance. In particular, CDS proved to be inefficient. This approach requires at least 3 sensors for localization at a single point. It was also difficult to obtain accurate distances, and the sample size was greatly reduced by the failure to detect some whistles on all three recorders. As a result, this approach is not recommended for marine mammal abundance estimation when few recorders are available, or in high sound attenuation environments with relatively low sample sizes. It is hoped that these results lead to more informed management decisions, and therefore, more effective species conservation.
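The non-Euclidean "distance" used in the SECR detection function is sound transmission loss. A sketch with a common spherical-spreading-plus-absorption form (the absorption coefficient below is illustrative, not a measured Sarasota Bay value):

```python
import math

# Transmission loss in dB for a source at range r_m metres:
# spherical spreading (20*log10 r) plus frequency-dependent absorption
# (alpha dB/km). In the SECR variant described above, detection
# probability is modelled as a function of this loss rather than of
# Euclidean distance.
def transmission_loss(r_m, alpha_db_per_km=1.0):
    return 20 * math.log10(r_m) + alpha_db_per_km * r_m / 1000.0

tl_100 = transmission_loss(100)    # whistle detected at 100 m
tl_1000 = transmission_loss(1000)  # same whistle at 1 km
print(round(tl_100, 1), round(tl_1000, 1))  # 40.1 61.0
```

Because loss grows nonlinearly with range (and differently in high-attenuation environments), substituting it for Euclidean distance changes which detections the model treats as "far", which is the improvement the study reports.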

Relevance: 60.00%

Abstract:

Climate may affect broiler production, especially where heat waves occur, which may cause high mortality rates due to heat stress. Heat wave prediction and characterization may allow early mitigation actions to be taken. Data mining is one of the tools used for such characterization, particularly when a large number of variables is involved. The objective of this study was to classify heat waves that promote broiler chicken mortality in poultry houses equipped with minimal environmental control. A single day of heat, a heat-shock day, is capable of producing high broiler mortality. In poultry houses equipped with fans and evaporative cooling, the characterization of heat waves affecting broiler mortality between 29 days of age and market age presented 89.34% model accuracy and 0.73 class precision for high mortality. There was no influence on high mortality of birds between 29 and 31 days of age. A maximum temperature-humidity index (THI) above 30.6 ºC was the main characteristic of heat wave days causing high mortality in broilers older than 31 days. High mortality of broilers between 31 and 40 days of age occurred when maximum THI was above 30.6 ºC and the maximum temperature of the day was above 34.4 ºC. There were two main causes of high mortality in broilers older than 40 days: 1) maximum THI above 30.6 ºC and minimum THI equal to or lower than 15.5 ºC; 2) maximum THI above 30.6 ºC, minimum THI lower than 15.5 ºC, and the time of maximum temperature later than 15:00 h. The heat wave influence on broiler mortality lasted an average of 2.7 days.
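The threshold rules above translate directly into code. The abstract does not state which THI equation was used, so the formulation below (THI = T - 0.55(1 - RH)(T - 14.5), which yields values in ºC) is one common choice and purely illustrative:

```python
# Temperature-humidity index (THI) in degrees C, using one common
# formulation; the study's exact THI equation is not given in the
# abstract, so treat this as a sketch.
def thi(temp_c, rel_humidity):
    return temp_c - 0.55 * (1.0 - rel_humidity) * (temp_c - 14.5)

# High-mortality rule reported for broilers aged 31-40 days:
# max THI > 30.6 C and max temperature > 34.4 C.
def heat_wave_day(max_temp_c, rel_humidity):
    return thi(max_temp_c, rel_humidity) > 30.6 and max_temp_c > 34.4

print(heat_wave_day(36.0, 0.70), heat_wave_day(30.0, 0.50))  # True False
```

The study's data-mining model adds further conditions (minimum THI and the hour of maximum temperature) for older birds.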

Relevance: 40.00%

Abstract:

The literature presents a large number of different simulations of gas-solid flows in risers applying two-fluid modeling. In spite of that, the related quantitative accuracy issue remains mostly untouched. This state of affairs seems to be mainly a consequence of modeling shortcomings, notably the lack of realistic closures. In this article, predictions from a two-fluid model are compared to other published two-fluid model predictions applying the same closures, and to experimental data. A particular matter of concern is whether or not the predictions are generated inside the statistical steady-state regime that characterizes riser flows. The present simulation was performed inside the statistical steady-state regime. Time-averaged results are presented for time-averaging intervals of 5, 10, 15, and 20 s inside the statistical steady-state regime. The independence of the averaged results from the time-averaging interval is addressed, and the results averaged over the intervals of 10 and 20 s are compared to both experiment and other two-fluid predictions. It is concluded that the two-fluid model used is still very crude and cannot provide quantitatively accurate results, at least for the particular case considered.