884 results for Climate-Leaf Analysis Multivariate Program (CLAMP) (Wolfe, 1993)
Abstract:
Purpose: Spontaneous eye blink activity in the primary eye position and its relationship to age and gender were assessed using digital image processing techniques to quantify blink opening and closing times. Methods: One hundred and eighty healthy volunteers (90 males and 90 females), divided into the age groups 0-3, 4-12, 13-20, 21-40, 41-60 and ≥60 years old, were evaluated prospectively. They were videotaped digitally in a standard setting, and the images were transferred to a personal computer (Macintosh 400) and processed with the iMovie software. Blink opening and closing times were measured at 30 frames/second, and the data were then subjected to statistical analysis. Results: The closing time was significantly longer than the opening time for all ages and both genders. Elderly individuals (≥41 years old) and women had significantly longer closing times. Conclusion: Image processing techniques made it possible to observe differences in spontaneous eye blink opening and closing times in relation to age and gender. Copyright © 2005 Taylor & Francis LLC.
Abstract:
Computer programs enable the transformation of raw data into useful information for decision making in many fields, including agriculture. Various programs have been developed to assist farmers in making better decisions about crop management practices and plant nutrition parameters. This article introduces the CND-Goiaba 1.0 software (written in C#) and its use as a tool to perform the mathematical calculations involved in determining the compositional nutrient diagnosis (CND) indexes for the guava tree. This program was developed in Brazil, the world's leading producer of red guavas. A database was created from 205 leaf samples collected in commercial plots (sampling units) of cultivated 'Paluma' guava trees (Psidium guajava L.) aged between 5 and 20 years, during the 2009-2010 and 2010-2011 growing seasons. The production data were normally distributed according to the Shapiro-Wilk test (W=0.988; p=0.11). The software made it possible to diagnose that 63% of the orchards evaluated needed to improve the nutritional status of their trees. The CND method revealed severe nutritional imbalances in Mg and Zn in these orchards. © ISHS.
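The CND indexes that such software automates follow the standard row-centered log-ratio procedure. As a rough illustration (this is not the CND-Goiaba 1.0 code, and the reference norms in the example are hypothetical), the calculation can be sketched as:

```python
import math

def cnd_indexes(sample, norm_means, norm_sds, total=100.0):
    """Compositional Nutrient Diagnosis (CND) indexes.

    sample     : dict nutrient -> concentration (e.g. % dry matter)
    norm_means : dict nutrient (+ 'R') -> mean row-centered log ratio
                 in a high-yield reference population
    norm_sds   : matching standard deviations from that population
    """
    # Filling value R completes the composition to the total (e.g. 100%).
    comp = dict(sample)
    comp["R"] = total - sum(sample.values())
    # Geometric mean of all components, including R.
    g = math.exp(sum(math.log(v) for v in comp.values()) / len(comp))
    # Row-centered log ratios, standardized against the reference norms.
    return {k: (math.log(v / g) - norm_means[k]) / norm_sds[k]
            for k, v in comp.items()}
```

Given norms estimated from a high-yield subpopulation, a strongly negative index flags a relative deficiency (such as the Mg and Zn imbalances reported above) and a strongly positive index a relative excess.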
Abstract:
Different procedures within the Diagnosis and Recommendation Integrated System may influence the accuracy of foliar diagnosis of mango. The objective of this study was therefore to compare diagnoses of the nutritional status of mango obtained by different assessment methods, contrasting multivariate relations (the CND method) with bivariate relationships (DRIS), including variations in the use of specific or preliminary standards and in the logarithmic transformation of the data. Leaf macro- and micronutrient analysis results from 63 mango orchards of the Lower-Middle São Francisco River Valley, Brazil, were used. Nutritional status was interpreted using the Potential Response to Fertilization (PRA) criterion. The CND and DRIS methods, with and without logarithmic transformation and using either specific or preliminary standards, showed similar performance in assessing the nutritional status of mango in the Lower-Middle São Francisco Valley. In mango orchards of the semiarid region of northeast Brazil, micronutrient deficiencies (Zn, Fe and Cu) were more frequent than macronutrient deficiencies.
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
Efficiency in the use of genetic variability, whether existing or created, increases when it is properly explored and analysed. Incorporation of biotechnology into breeding programs has become general practice, and the challenge for the researcher is the constant development of new and improved cultivars. The aim of this experiment was to select progenies with superior characteristics, whether or not they carry the RR gene, derived from bi-parental crosses in soybean, with the help of multivariate techniques. The experiment was carried out in a family-type experimental design, including controls, during the agricultural years 2010/2011 and 2011/2012 in Jaboticabal in the Brazilian State of São Paulo. From the F3 generation, phenotypically superior plants were selected and evaluated for the following traits: number of days to flowering; number of days to maturity; height of first pod insertion; plant height at maturity; lodging; agronomic value; number of branches; number of pods per plant; 100-seed weight; number of seeds per plant; and grain yield per plant. Given the results, it appears possible to select superior progeny by principal component analysis. Cluster analysis using the K-means method links progeny according to the most important characteristics in each group and identifies, by the Ward method and by means of a dendrogram, the structure of similarity and divergence between selected progenies. Both methods are effective in aiding progeny selection.
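The principal-component step behind this kind of progeny selection can be illustrated with a minimal sketch (assuming NumPy; the trait matrix dimensions below are hypothetical, not the Jaboticabal data):

```python
import numpy as np

def principal_components(X, k=2):
    """Project trait measurements onto the first k principal components.

    X : (n_progenies, n_traits) array. Traits are standardized first so
        that differently scaled traits (days, grams, counts) are comparable.
    """
    Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
    # SVD of the standardized matrix: rows of Vt are the principal axes.
    U, S, Vt = np.linalg.svd(Z, full_matrices=False)
    scores = Z @ Vt[:k].T                # progeny coordinates on the PCs
    explained = (S ** 2) / (S ** 2).sum()  # variance share per component
    return scores, explained[:k]
```

Progenies with extreme scores on the first components, which concentrate most of the standardized trait variance, are the natural candidates for selection.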
Abstract:
Studies of diagnostic accuracy require more sophisticated methods for their meta-analysis than studies of therapeutic interventions. A number of different, and apparently divergent, methods for meta-analysis of diagnostic studies have been proposed, including two alternative approaches that are statistically rigorous and allow for between-study variability: the hierarchical summary receiver operating characteristic (ROC) model (Rutter and Gatsonis, 2001) and bivariate random-effects meta-analysis (van Houwelingen and others, 1993, 2002; Reitsma and others, 2005). We show that these two models are very closely related, and define the circumstances in which they are identical. We discuss the different forms of summary model output suggested by the two approaches, including summary ROC curves, summary points, confidence regions, and prediction regions.
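To illustrate the quantities these models operate on: each study contributes a (logit sensitivity, logit specificity) pair. The sketch below pools those pairs into a crude unweighted summary point; the actual bivariate random-effects model instead fits a joint normal distribution to the pairs, with study weights and between-study covariance (function names and the continuity correction here are illustrative assumptions, not the cited authors' code):

```python
import math

def logit_sens_spec(tp, fn, fp, tn, cc=0.5):
    """Per-study inputs: logit sensitivity and logit specificity,
    with a 0.5 continuity correction to avoid zero cells."""
    sens = (tp + cc) / (tp + fn + 2 * cc)
    spec = (tn + cc) / (tn + fp + 2 * cc)
    return math.log(sens / (1 - sens)), math.log(spec / (1 - spec))

def summary_point(studies):
    """Unweighted mean of the logit pairs, back-transformed.
    A stand-in for the bivariate model's estimated summary point."""
    pairs = [logit_sens_spec(*s) for s in studies]
    m_sens = sum(a for a, _ in pairs) / len(pairs)
    m_spec = sum(b for _, b in pairs) / len(pairs)
    expit = lambda x: 1.0 / (1.0 + math.exp(-x))
    return expit(m_sens), expit(m_spec)
```

Working on the logit scale is what both approaches share; they differ in how the between-study distribution of these pairs is modeled.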
Abstract:
Simulations of forest stand dynamics in a modelling framework such as the Forest Vegetation Simulator (FVS) are diameter driven, so the diameter or basal area increment model needs special attention. This dissertation critically evaluates diameter or basal area increment models and modelling approaches in the context of the Great Lakes region of the United States and Canada. A set of related studies is presented that critically evaluate the sub-model for change in individual tree basal diameter used in the Forest Vegetation Simulator (FVS), a dominant forestry model in the Great Lakes region. Various historical implementations of the STEMS (Stand and Tree Evaluation and Modeling System) family of diameter increment models, including the current public release of the Lake States variant of FVS (LS-FVS), were tested for the 30 most common tree species using data from the Michigan Forest Inventory and Analysis (FIA) program. The results showed that the current public release of the LS-FVS diameter increment model over-predicts 10-year diameter increment by 17% on average. The study also affirms that a simple adjustment factor as a function of a single predictor, dbh (diameter at breast height), as used in past versions, provides an inadequate correction of model prediction bias. In order to re-engineer the basal diameter increment model, the historical, conceptual and philosophical differences among the individual tree increment model families and their modelling approaches were analyzed and discussed. Two underlying conceptual approaches toward diameter or basal area increment modelling have often been used: the potential-modifier (POTMOD) and composite (COMP) approaches, exemplified by the STEMS/TWIGS and Prognosis models, respectively. It is argued that both approaches essentially use a similar base function, and neither is conceptually different from a biological perspective, even though their model forms look different.
No matter what modelling approach is used, the base function is the foundation of an increment model. Two base functions – gamma and Box-Lucas – were identified as candidate base functions for forestry applications. The results of a comparative analysis of empirical fits showed that quality of fit is essentially similar, and both are sufficiently detailed and flexible for forestry applications. The choice of either base function in order to model diameter or basal area increment is dependent upon personal preference; however, the gamma base function may be preferred over the Box-Lucas, as it fits the periodic increment data in both a linear and nonlinear composite model form. Finally, the utility of site index as a predictor variable has been criticized, as it has been widely used in models for complex, mixed species forest stands though not well suited for this purpose. An alternative to site index in an increment model was explored, using site index and a combination of climate variables and Forest Ecosystem Classification (FEC) ecosites and data from the Province of Ontario, Canada. The results showed that a combination of climate and FEC ecosites variables can replace site index in the diameter increment model.
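As a rough numerical illustration of the two candidate base functions (the parameter values below are hypothetical, not the fitted Great Lakes coefficients), both produce the rise-peak-decline increment pattern expected over tree diameter:

```python
import math

# Illustrative parameter values only; real values come from fitting
# periodic increment data for each species.
def gamma_increment(d, a=0.5, b=0.8, c=0.05):
    """Gamma-type base function: increment rises with diameter,
    peaks (here near d = b/c), then declines."""
    return a * d ** b * math.exp(-c * d)

def box_lucas_increment(d, a=2.0, b=0.02, c=0.2):
    """Box-Lucas-type base function: a difference of exponentials
    with the same unimodal rise-peak-decline shape."""
    return a * (math.exp(-b * d) - math.exp(-c * d))
```

The similar shapes are consistent with the finding above that the two fits are of essentially similar quality, with the choice between them largely a matter of preference.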
Abstract:
The assessment of the thermal bioclimate is based on the human energy balance and derived indices such as the physiologically equivalent temperature (PET) or the Universal Thermal Climate Index (UTCI). These two indices were compared over a period of ten years, based on hourly data, in a Central European city with a temperate climate. The analysis shows that the differences obtained result from the different thermo-physiological settings of clothing insulation. For conditions with extremely high vapour pressure values, UTCI yields higher values than PET, which may describe the thermo-physiological stress more appropriately.
Abstract:
The first manuscript, entitled "Time-Series Analysis as Input for Clinical Predictive Modeling: Modeling Cardiac Arrest in a Pediatric ICU" lays out the theoretical background for the project. There are several core concepts presented in this paper. First, traditional multivariate models (where each variable is represented by only one value) provide single point-in-time snapshots of patient status: they are incapable of characterizing deterioration. Since deterioration is consistently identified as a precursor to cardiac arrests, we maintain that the traditional multivariate paradigm is insufficient for predicting arrests. We identify time series analysis as a method capable of characterizing deterioration in an objective, mathematical fashion, and describe how to build a general foundation for predictive modeling using time series analysis results as latent variables. Building a solid foundation for any given modeling task involves addressing a number of issues during the design phase. These include selecting the proper candidate features on which to base the model, and selecting the most appropriate tool to measure them. We also identified several unique design issues that are introduced when time series data elements are added to the set of candidate features. One such issue is in defining the duration and resolution of time series elements required to sufficiently characterize the time series phenomena being considered as candidate features for the predictive model. Once the duration and resolution are established, there must also be explicit mathematical or statistical operations that produce the time series analysis result to be used as a latent candidate feature. In synthesizing the comprehensive framework for building a predictive model based on time series data elements, we identified at least four classes of data that can be used in the model design. 
The first two classes are shared with traditional multivariate models: multivariate data and clinical latent features. Multivariate data is represented by the standard one-value-per-variable paradigm and is widely employed in a host of clinical models and tools; it is often represented by a number present in a given cell of a table. Clinical latent features are derived, rather than directly measured, data elements that more accurately represent a particular clinical phenomenon than any of the directly measured data elements in isolation. The second two classes are unique to the time series data elements. The first of these is the raw data elements. These are represented by multiple values per variable, and constitute the measured observations that are typically available to end users when they review time series data; they are often represented as dots on a graph. The final class of data results from performing time series analysis. This class of data represents the fundamental concept on which our hypothesis is based. The specific statistical or mathematical operations are up to the modeler to determine, but we generally recommend that a variety of analyses be performed in order to maximize the likelihood of producing a representation of the time series data elements that is able to distinguish between two or more classes of outcomes. The second manuscript, entitled "Building Clinical Prediction Models Using Time Series Data: Modeling Cardiac Arrest in a Pediatric ICU", provides a detailed description, start to finish, of the methods required to prepare the data, build, and validate a predictive model that uses the time series data elements determined in the first paper. One of the fundamental tenets of the second paper is that manual implementations of time series based models are unfeasible due to the relatively large number of data elements and the complexity of preprocessing that must occur before data can be presented to the model.
Each of the seventeen steps is analyzed from the perspective of how it may be automated, when necessary. We identify the general objectives and available strategies of each of the steps, and we present our rationale for choosing a specific strategy for each step in the case of predicting cardiac arrest in a pediatric intensive care unit. Another issue brought to light by the second paper is that the individual steps required to use time series data for predictive modeling are more numerous and more complex than those used for modeling with traditional multivariate data. Even after complexities attributable to the design phase (addressed in our first paper) have been accounted for, the management and manipulation of the time series elements (the preprocessing steps in particular) are issues that are not present in a traditional multivariate modeling paradigm. In our methods, we present the issues that arise from the time series data elements: defining a reference time; imputing and reducing time series data in order to conform to a predefined structure that was specified during the design phase; and normalizing variable families rather than individual variable instances. The final manuscript, entitled: "Using Time-Series Analysis to Predict Cardiac Arrest in a Pediatric Intensive Care Unit" presents the results that were obtained by applying the theoretical construct and its associated methods (detailed in the first two papers) to the case of cardiac arrest prediction in a pediatric intensive care unit. Our results showed that utilizing the trend analysis from the time series data elements reduced the number of classification errors by 73%. The area under the Receiver Operating Characteristic curve increased from a baseline of 87% to 98% by including the trend analysis. 
In addition to the performance measures, we were also able to demonstrate that adding raw time series data elements without their associated trend analyses improved classification accuracy as compared to the baseline multivariate model, but diminished classification accuracy as compared to when just the trend analysis features were added (i.e., without adding the raw time series data elements). We believe this phenomenon was largely attributable to overfitting, which is known to increase as the ratio of candidate features to class examples rises. Furthermore, although we employed several feature reduction strategies to counteract the overfitting problem, they failed to improve the performance beyond that which was achieved by excluding the raw time series elements. Finally, our data demonstrated that pulse oximetry and systolic blood pressure readings tend to start diminishing about 10-20 minutes before an arrest, whereas heart rates tend to diminish rapidly less than 5 minutes before an arrest.
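The kind of trend-analysis latent feature described in these manuscripts can be sketched minimally as a least-squares slope over a trailing window of observations (a generic illustration, not the authors' actual feature set):

```python
def trend_feature(values, times=None):
    """Least-squares slope of a vital-sign window: one latent trend feature.

    values : observations in a trailing window whose duration and
             resolution were fixed during the design phase
    times  : optional sample times; defaults to evenly spaced indices
    """
    n = len(values)
    t = times if times is not None else list(range(n))
    t_mean = sum(t) / n
    v_mean = sum(values) / n
    num = sum((ti - t_mean) * (vi - v_mean) for ti, vi in zip(t, values))
    den = sum((ti - t_mean) ** 2 for ti in t)
    # Negative slope indicates deterioration for signals such as SpO2 or
    # systolic blood pressure, which the paper reports diminishing
    # 10-20 minutes before an arrest.
    return num / den
```

Evaluated over many windows and many variables, slopes like this become the latent candidate features that the trend-analysis class of data contributes to the model.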
Abstract:
Global data-flow analysis of (constraint) logic programs, which is generally based on abstract interpretation [7], is reaching a comparatively high level of maturity. A natural question is whether it is time for its routine incorporation in standard compilers, something which, beyond a few experimental systems, has not happened to date. Such incorporation arguably makes good sense only if: • the range of applications of global analysis is large enough to justify the additional complication in the compiler, and • global analysis technology can deal with all the features of "practical" languages (e.g., the ISO-Prolog built-ins) and "scales up" for large programs. We present a tutorial overview of a number of concepts and techniques directly related to the issues above, with special emphasis on the first one. In particular, we concentrate on novel uses of global analysis during program development and debugging, rather than on the more traditional application area of program optimization. The idea of using abstract interpretation for validation and diagnosis has been studied in the context of imperative programming [2] and also of logic programming. The latter work includes issues such as using approximations to reduce the burden posed on programmers by declarative debuggers [6, 3] and automatically generating and checking assertions [4, 5] (which includes the more traditional type checking of strongly typed languages, such as Gödel or Mercury [1, 8, 9]). We also review some solutions for scalability, including modular analysis, incremental analysis, and widening. Finally, we discuss solutions for dealing with meta-predicates, side-effects, delay declarations, constraints, dynamic predicates, and other such features which may appear in practical languages. In the discussion we draw both from the literature and from our experience and that of others in the development and use of the CIAO system analyzer.
In order to emphasize the practical aspects of the solutions discussed, the presentation of several concepts will be illustrated by examples run on the CIAO system, which makes extensive use of global analysis and assertions.
Abstract:
This study investigated group processes as potential mediators or moderators of intervention response, both the promotion of positive development outcomes and the reduction of negative ones, by evaluating the utility of a group measure modified from a widely known measure of group impact found in the group therapy research literature. Four group processes were of primary interest: (1) Group Impact; (2) Facilitator Impact; (3) Skills Impact; and (4) Exploration Impact, as assessed by the Session Evaluation Form (SEF). Outcome measures included the Personally Expressive Activities Questionnaire (PEAQ), the Erikson Psycho-Social Index (EPSI) and the Zill Behavior Items Behavior Problem Index (ZBI-BPI). The sample consisted of 121 multi-ethnic participants drawn from four alternative high schools in the Miami-Dade County Public School system. Using a latent growth curve modeling approach with structural equation modeling (SEM) statistics, preliminary analyses were conducted to evaluate the psychometric properties of the SEF and its role in the mediation or moderation of intervention outcome. Preliminary results revealed evidence of a single higher-order factor representing a "General" global reaction to the program, hypothesized to be a "Positive Group Climate" construct, as opposed to the four distinct group processes that were initially hypothesized to affect outcomes. In the evaluation of its mediating or moderating role, the single "General" global latent factor ("Positive Group Climate" construct) did not significantly predict treatment response on any of the outcome variables. Nevertheless, the evidence of an underlying "General" global latent factor suggests important future directions for research on positive youth development programs as well as in group therapy research.
Abstract:
An interdisciplinary field trip to a remote marine lab brought together graduate students from fine arts and natural resource science departments to think creatively about climate change and science communication. We followed a learning cycle framework that allowed the students to explore marine ecosystems and participate in scientific lectures, group discussions, and an artist-led project making abstract collages representing climate change processes. Students subsequently worked in small groups to develop environmental communication material for public visitors. We assessed the learning activity and the communication products using pre- and post-field trip participant surveys, focus group discussions, and critiques of the products by art and communication experts. Significant changes in knowledge about climate change occurred in program participants. Incorporating artists and the arts into this activity helped engage multiple senses, emphasized social interaction, and supported participants in thinking creatively. The production of art helped to encourage peer learning and to normalize the different views among participants in communicating about climate change impacts. Based on external reviews, students created effective communication products. Disciplinary differences in cultures, language, and standards challenged participating faculty, yet unanticipated outcomes such as potentially transformative learning and improved teacher evaluations resulted.
Abstract:
Replacing part of the virgin materials in highway applications with recycled materials has shown great benefits to society and the environment. Beneficial use of recycled materials can save landfill space, scarce natural resources, and the energy consumed in milling and hauling virgin materials. The low price of recycled materials also favors cost savings in pavement projects. Considering their availability in the State of Maryland (MD), four abundant recycled materials were studied: recycled concrete aggregate (RCA), recycled asphalt pavement (RAP), foundry sand (FS), and dredged materials (DM). A survey was conducted to collect information on the current usage of the four recycled materials in state Departments of Transportation (DOTs). Based on a literature review, mechanical and environmental properties, recommendations, and suggested test standards were investigated separately for the four recycled materials in different applications. Constraints on using these materials were further studied in order to provide recommendations for the development of related MD specifications. To measure the social and environmental benefits of using recycled materials, life-cycle assessment was carried out with a life-cycle analysis (LCA) program, PaLATE, and a green highway rating system, BEST-in-Highway. The survey results indicated wide use of RAP and RCA in hot mix asphalt (HMA) and graded aggregate base (GAB), respectively, while FS and DM are less used in the field. Environmental concerns are minor, but possibly low quality and some adverse mechanical characteristics may hinder the wide use of these recycled materials. Technical documents and current specifications provided by state DOTs are good references for the usage of these materials in MD. The literature review showed results consistent with the survey.
Studies from experimental research or site tests showed satisfactory performance of these materials in highway applications when the substitution rate, gradation, temperature, moisture, or usage of additives, etc., meet certain requirements. The results from the LCA revealed significant cost savings in using recycled materials. Energy and water consumption, gas emissions, and hazardous waste generation generally showed reductions to some degree. The use of new recycling technologies will contribute to more sustainable highways.