850 results for Local classification method
Abstract:
Routine analyses for the quantification of organic acids and sugars are generally slow, involve the preparation of several reagents, require trained professionals and special equipment, and are expensive. In this context, investment has been growing in research aimed at developing substitutes for the reference methods that are faster, cheaper and simpler, and infrared spectroscopy has stood out in this regard. The present study developed multivariate calibration models for the simultaneous quantitative determination of ascorbic, citric, malic and tartaric acids, the sugars sucrose, glucose and fructose, and soluble solids in fruit juices and nectars, as well as classification models based on principal component analysis (PCA). Near-infrared (NIR) spectroscopy was used in association with partial least squares (PLS) regression. Forty-two samples of juices and fruit nectars commercially available in local shops were used. For the construction of the models, reference analyses were performed using high-performance liquid chromatography (HPLC), and soluble solids were measured by refractometry. Spectra were then acquired in triplicate over the spectral range 12,500 to 4,000 cm^-1. The best models were applied to the quantification of the analytes under study in natural juices and in juice samples produced in the Southwest Region of Paraná. The juices used in the application of the models also underwent physicochemical analysis. Validation of the chromatographic methodology showed satisfactory results: the external calibration curves had coefficients of determination (R2) above 0.98 and coefficients of variation (%CV) for intermediate precision and repeatability below 8.83%.
Principal component analysis (PCA) made it possible to separate the juice samples into two major groups (grape and apple versus tangerine and orange), while the nectars separated into guava and grape versus pineapple and apple. Using different validation methods and pre-processing techniques, separately and in combination, multivariate calibration models were obtained with root mean square errors of prediction (RMSEP) and of cross-validation (RMSECV) below 1.33 and 1.53 g/100 mL, respectively, and R2 above 0.771, except for malic acid. The physicochemical analyses enabled the characterization of the beverages, including the pH working range (2.83 to 5.79) and acidity within the regulatory parameters for each flavor. The regression models demonstrated that ascorbic, citric, malic and tartaric acids, as well as sucrose, glucose and fructose, can be determined successfully from a single spectrum, suggesting that the models are economically viable for quality control and product standardization in the fruit juice and nectar processing industry.
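The validation statistics quoted above (R2, RMSEP, %CV) follow from standard formulas; a minimal sketch with hypothetical reference-versus-predicted sucrose values (the numbers below are invented for illustration, not the study's data):

```python
import math

def r_squared(y_ref, y_pred):
    """Coefficient of determination between reference and predicted values."""
    mean = sum(y_ref) / len(y_ref)
    ss_res = sum((r - p) ** 2 for r, p in zip(y_ref, y_pred))
    ss_tot = sum((r - mean) ** 2 for r in y_ref)
    return 1.0 - ss_res / ss_tot

def rmsep(y_ref, y_pred):
    """Root mean square error of prediction (same units as y, e.g. g/100 mL)."""
    n = len(y_ref)
    return math.sqrt(sum((r - p) ** 2 for r, p in zip(y_ref, y_pred)) / n)

def cv_percent(replicates):
    """Coefficient of variation (%) of replicate measurements (repeatability)."""
    n = len(replicates)
    mean = sum(replicates) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in replicates) / (n - 1))
    return 100.0 * sd / mean

# Hypothetical external-validation set: reference (HPLC) vs NIR-predicted sucrose
ref  = [2.1, 4.8, 7.3, 9.9, 12.4]   # g/100 mL
pred = [2.4, 4.5, 7.6, 9.6, 12.9]

print(f"R2    = {r_squared(ref, pred):.3f}")
print(f"RMSEP = {rmsep(ref, pred):.3f} g/100 mL")
print(f"%CV   = {cv_percent([7.3, 7.5, 7.1]):.2f}")
```

RMSECV is the same root-mean-square formula applied to cross-validation residuals rather than to an external prediction set.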
Abstract:
Nurse rostering is a complex scheduling problem that affects hospital personnel on a daily basis all over the world. This paper presents a new component-based approach with adaptive perturbations for a nurse scheduling problem arising at a major UK hospital. The main idea behind this technique is to decompose a schedule into its components (i.e. the allocated shift pattern of each nurse), and then mimic a natural evolutionary process on these components to iteratively deliver better schedules. The worthiness of all components in the schedule has to be continuously demonstrated in order for them to remain there. This demonstration employs a dynamic evaluation function which evaluates how well each component contributes towards the final objective. Two perturbation steps are then applied: the first perturbation eliminates a number of components that are deemed not worthy to stay in the current schedule; the second perturbation may also throw out, with a low level of probability, some worthy components. The eliminated components are replenished with new ones using a set of constructive heuristics based on local optimality criteria. Computational results using 52 data instances demonstrate the applicability of the proposed approach in solving real-world problems.
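The decompose–evaluate–perturb–rebuild loop described above can be sketched on a toy coverage problem (the shift patterns, daily demands, and the 10% second-perturbation probability are illustrative, not the paper's instance):

```python
import random

random.seed(7)

DAYS = 7
DEMAND = [3, 3, 2, 3, 2, 1, 1]        # nurses required per day (toy values)
PATTERNS = [                           # candidate weekly shift patterns
    (1, 1, 1, 0, 0, 0, 0),
    (0, 0, 1, 1, 1, 0, 0),
    (0, 0, 0, 0, 1, 1, 1),
    (1, 0, 0, 1, 0, 0, 1),
    (0, 1, 0, 0, 1, 1, 0),
]
NURSES = 5

def cost(schedule):
    """Total absolute deviation from daily demand (0 = perfect coverage)."""
    return sum(abs(sum(p[d] for p in schedule) - DEMAND[d]) for d in range(DAYS))

def worthiness(schedule, i):
    """How much removing nurse i's pattern worsens coverage (higher = worthier)."""
    reduced = schedule[:i] + schedule[i + 1:]
    return cost(reduced) - cost(schedule)

def rebuild(partial):
    """Constructive heuristic: greedily refill with locally best patterns."""
    sched = list(partial)
    while len(sched) < NURSES:
        sched.append(min(PATTERNS, key=lambda p: cost(sched + [p])))
    return sched

schedule = rebuild([])
best = schedule
for _ in range(50):
    ranked = sorted(range(len(schedule)), key=lambda i: worthiness(schedule, i))
    survivors = ranked[1:]                                        # drop least worthy component
    survivors = [i for i in survivors if random.random() > 0.1]   # low-probability second perturbation
    schedule = rebuild([schedule[i] for i in survivors])
    if cost(schedule) < cost(best):
        best = schedule

print("best coverage cost:", cost(best))
```

The marginal-contribution `worthiness` plays the role of the paper's dynamic evaluation function; a real roster would add hard constraints (rest rules, skills, contracts) to both the cost and the constructive heuristic.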
Abstract:
This work studies the Local Productive Arrangement (LPA) of garment manufacturing in the Agreste region of Pernambuco, a sector of economic and social relevance. The central aim of the research is to understand how inter-organizational relations influence the collective efficiency of the arrangement. The theoretical framework highlights approaches that deal with the benefits of business agglomeration for the development of firms and regions. It discusses the approach of small and medium enterprises and industrial districts (SCHMITZ, 1997), which introduces the concept of collective efficiency, arguing that the externalities described by Marshall (1996) are not by themselves sufficient to explain the competitive advantage of enterprises, and extending the idea that organizations do not achieve competitive advantage by acting alone. To examine the influence of these relations on collective efficiency, social network theory was adopted as the analytical perspective (GRANOVETTER, 1973, 1985; BURT, 1992; UZZI, 1997), since this approach provides grounds for a structural analysis of social relationships in the face of human action. In examining organizations in a social network, one should understand the reasons for establishing the relationship, its benefits, how the information flow takes place, and the density of ties between actors (POWELL; SMITH-DOERR, 1994). As for method, this study is characterized as a qualitative case study, in accordance with the proposed objectives. In addition, in order to recover the historical milestones of the arrangement, a sectional approach with a longitudinal perspective is used (VIEIRA, 2004).
Primary and secondary data were used to understand the evolutionary process of the sector and the inter-actor relationships within the arrangement for the promotion of development, employing content analysis and documentary analysis, respectively (DELLAGNELO; SILVA, 2005). The social network approach made it possible to understand that social relationships may extend the collective efficiency of the arrangement, and that policies are therefore needed to encourage the legalization of informal companies in the arrangement by making them representative. Thus, the relations established in the garment LPA of the Agreste of Pernambuco require more effective mechanisms to broaden collective efficiency. As currently constituted, these relations have directly benefited only the group of companies linked in some way to the supporting institutions. We conclude that inter-actor relations have limited the collective efficiency of the LPA, being stimulated by the supporting institutions only for certain groups of entrepreneurs rather than producing external relations for all clustered companies.
Abstract:
This thesis proposes a generic visual perception architecture for robotic clothes perception and manipulation. The proposed architecture is fully integrated with a stereo vision system and a dual-arm robot and is able to perform a number of autonomous laundering tasks. Clothes perception and manipulation is a novel research topic in robotics and has experienced rapid development in recent years. Compared to the task of perceiving and manipulating rigid objects, clothes perception and manipulation poses a greater challenge. This can be attributed to two reasons: firstly, deformable clothing requires precise (high-acuity) visual perception and dexterous manipulation; secondly, as clothing approximates a non-rigid 2-manifold in 3-space which can adopt a quasi-infinite configuration space, the potential variability in the appearance of clothing items makes them difficult to understand, identify uniquely, and interact with by machine. From an applications perspective, and as part of the EU CloPeMa project, the integrated visual perception architecture refines a pre-existing clothing manipulation pipeline by completing pre-wash clothes (category) sorting (using single-shot or interactive perception for garment categorisation and manipulation) and post-wash dual-arm flattening. To the best of the author's knowledge, as investigated in this thesis, the autonomous clothing perception and manipulation solutions presented here were first proposed and reported by the author. All of the reported robot demonstrations in this work follow a perception-manipulation methodology in which visual and tactile feedback (in the form of surface wrinkledness captured by the high-accuracy depth sensor, i.e. the CloPeMa stereo head, or the predictive confidence modelled by Gaussian Processes) serve as the halting criteria in the flattening and sorting tasks, respectively.
From a scientific perspective, the proposed visual perception architecture addresses the above challenges by parsing and grouping 3D clothing configurations hierarchically from low-level curvatures, through mid-level surface shape representations (providing topological descriptions and 3D texture representations), to high-level semantic structures and statistical descriptions. A range of visual features such as Shape Index, Surface Topologies Analysis and Local Binary Patterns have been adapted within this work to parse clothing surfaces and textures, and several novel features have been devised, including B-Spline Patches with Locality-Constrained Linear coding, and Topology Spatial Distance to describe and quantify generic landmarks (wrinkles and folds). The essence of this proposed architecture comprises 3D generic surface parsing and interpretation, which is critical to underpinning a number of laundering tasks and has the potential to be extended to other rigid and non-rigid object perception and manipulation tasks. The experimental results presented in this thesis demonstrate that: firstly, the proposed grasping approach achieves on-average 84.7% accuracy; secondly, the proposed flattening approach is able to flatten towels, t-shirts and pants (shorts) within 9 iterations on average; thirdly, the proposed clothes recognition pipeline can recognise clothes categories from highly wrinkled configurations and advances the state-of-the-art by 36% in terms of classification accuracy, achieving an 83.2% true-positive classification rate when discriminating between five categories of clothes; finally, the Gaussian-Process-based interactive perception approach exhibits a substantial improvement over single-shot perception. Accordingly, this thesis has advanced the state-of-the-art of robot clothes perception and manipulation.
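Of the features named above, the Local Binary Pattern is simple enough to sketch concretely; a minimal pure-Python version of the basic 3x3 operator (the bit ordering and the toy patch are illustrative; the thesis adapts LBP variants to 3D clothing surface texture):

```python
def lbp_codes(img):
    """Basic 3x3 Local Binary Pattern: threshold each pixel's 8 neighbours
    against the centre value and pack the bits into an 8-bit code."""
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]   # clockwise from top-left
    h, w = len(img), len(img[0])
    codes = []
    for r in range(1, h - 1):
        row = []
        for c in range(1, w - 1):
            centre = img[r][c]
            bits = [1 if img[r + dr][c + dc] >= centre else 0 for dr, dc in offsets]
            row.append(sum(b << i for i, b in enumerate(bits)))
        codes.append(row)
    return codes

# Tiny hypothetical grayscale patch: a sharp vertical edge (dark left, bright right),
# crudely standing in for a wrinkle boundary
patch = [[10, 10, 200, 200],
         [10, 10, 200, 200],
         [10, 10, 200, 200],
         [10, 10, 200, 200]]
print(lbp_codes(patch))
```

A texture descriptor is then usually the histogram of these codes over a region, which is what makes LBP robust to monotonic illumination changes.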
Abstract:
Energy Conservation Measure (ECM) project selection is made difficult by real-world constraints, limited resources to implement savings retrofits, various suppliers in the market, and project financing alternatives. Many of these energy-efficient retrofit projects should be viewed as a series of investments with annual returns for these traditionally risk-averse agencies. Given a list of available ECMs, federal, state and local agencies must determine how to implement projects at lowest cost. The most common methods of implementation planning are suboptimal with respect to cost. Federal, state and local agencies can obtain greater returns on their energy conservation investment than with traditional methods, regardless of the implementing organization. This dissertation outlines several approaches to improve the traditional energy conservation models. Public buildings in regions with similar energy conservation goals, in the United States or internationally, can also benefit greatly from this research. Additionally, many private owners of buildings are under mandates to conserve energy; e.g., Local Law 85 of the New York City Energy Conservation Code requires any building, public or private, to meet the most current energy code for any alteration or renovation. Thus, both public and private stakeholders can benefit from this research. The research in this dissertation advances and presents models that decision-makers can use to optimize the selection of ECM projects with respect to the total cost of implementation. A practical application of a two-level mathematical program with equilibrium constraints (MPEC) improves the current best practice for agencies concerned with making the most cost-effective selection leveraging energy services companies or utilities. The two-level model maximizes savings to the agency and profit to the energy services companies (Chapter 2).
An additional model leverages a single congressional appropriation to implement ECM projects (Chapter 3). Returns from implemented ECM projects are used to fund additional ECM projects. In these cases, fluctuations in energy costs and uncertainty in the estimated savings severely influence ECM project selection and the amount of the appropriation requested. A proposed risk aversion method imposes a minimum on the number of projects completed in each stage. A comparative method using Conditional Value at Risk is analyzed. Time consistency is also addressed in this chapter. This work demonstrates how a risk-based, stochastic, multi-stage model with binary decision variables at each stage provides a much more accurate estimate for planning than the agency's traditional approach and deterministic models. Finally, in Chapter 4, a rolling-horizon model allows for subadditivity and superadditivity of the energy savings to simulate interactive effects between ECM projects. The approach makes use of inequalities (McCormick, 1976) to re-express constraints that involve the product of binary variables with an exact linearization (related to the convex hull of those constraints). This model additionally shows the benefits of learning between stages while remaining consistent with the single congressional appropriation framework.
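The McCormick inequalities referred to above give an exact linearization of a product of binary variables; a quick enumeration check of the identity (a sketch, not the dissertation's model):

```python
from itertools import product

def mccormick_feasible(x, y, z):
    """The four McCormick inequalities linearizing z = x*y for x, y in [0, 1]."""
    return (z <= x) and (z <= y) and (z >= x + y - 1) and (z >= 0)

# For binary x, y the envelope is exact: the only feasible binary z is x*y,
# so the bilinear term can be replaced by these linear constraints in a MILP.
for x, y in product([0, 1], repeat=2):
    feasible = [z for z in (0, 1) if mccormick_feasible(x, y, z)]
    assert feasible == [x * y]
    print(f"x={x} y={y} -> z must equal {x * y}")
```

For continuous variables the same four inequalities describe only the convex hull (a relaxation); exactness here relies on x and y being binary.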
Abstract:
The paper catalogues the procedures and steps involved in agroclimatic classification. These vary from conventional descriptive methods to modern computer-based numerical techniques. There are three mutually independent numerical classification techniques, namely ordination, cluster analysis, and the minimum spanning tree, and under each technique several forms of grouping exist. The choice of numerical classification procedure differs with the type of data set. In the case of numerical continuous data sets with both positive and negative values, the simplest and least controversial procedures are the unweighted pair-group method (UPGMA) and the weighted pair-group method (WPGMA) under clustering techniques, with the similarity measure obtained either from the Gower metric or the standardized Euclidean metric. Where the number of attributes is large, these can be reduced to fewer new attributes defined by the principal components or coordinates of an ordination technique. The first few components or coordinates explain the maximum variance in the data matrix. These derived attributes are less affected by noise in the data set. Misclassifications can be checked using the minimum spanning tree.
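As an illustration of the clustering step described above, a minimal pure-Python UPGMA on a small hypothetical dissimilarity matrix (the four "stations" and their distances are invented; a real analysis would use Gower or standardized Euclidean dissimilarities between climatic attributes):

```python
def upgma(dist, labels):
    """Unweighted pair-group method: repeatedly merge the pair of clusters
    with the smallest average inter-cluster distance."""
    clusters = [[i] for i in range(len(labels))]
    merges = []
    while len(clusters) > 1:
        best = None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                # average pairwise distance between members of the two clusters
                d = sum(dist[i][j] for i in clusters[a] for j in clusters[b])
                d /= len(clusters[a]) * len(clusters[b])
                if best is None or d < best[0]:
                    best = (d, a, b)
        d, a, b = best
        merges.append((sorted(labels[i] for i in clusters[a] + clusters[b]), round(d, 2)))
        clusters[a] = clusters[a] + clusters[b]
        del clusters[b]
    return merges

# Hypothetical dissimilarities among four stations (e.g. standardized Euclidean)
D = [[0.0, 0.3, 0.8, 0.9],
     [0.3, 0.0, 0.7, 0.8],
     [0.8, 0.7, 0.0, 0.2],
     [0.9, 0.8, 0.2, 0.0]]

for members, height in upgma(D, ["S1", "S2", "S3", "S4"]):
    print(members, "merged at", height)
```

WPGMA differs only in weighting the two merged clusters equally regardless of their sizes when averaging distances.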
Abstract:
In the first part of this thesis we search for physics beyond the Standard Model through the anomalous production of the Higgs boson, using the razor kinematic variables. We search for anomalous Higgs boson production in proton-proton collisions at a center-of-mass energy of √s = 8 TeV collected by the Compact Muon Solenoid experiment at the Large Hadron Collider, corresponding to an integrated luminosity of 19.8 fb^-1.
In the second part we present a novel method for using a quantum annealer to train a classifier to recognize events containing a Higgs boson decaying to two photons. We train that classifier using simulated proton-proton collisions at √s = 8 TeV producing either a Standard Model Higgs boson decaying to two photons or a non-resonant Standard Model process that produces a two-photon final state.
The production mechanisms of the Higgs boson are precisely predicted by the Standard Model based on its association with the mechanism of electroweak symmetry breaking. We measure the yield of Higgs bosons decaying to two photons in kinematic regions predicted to have very little contribution from a Standard Model Higgs boson and search for an excess of events, which would be evidence of either non-standard production or non-standard properties of the Higgs boson. We divide the events into disjoint categories based on kinematic properties and the presence of additional b-quarks produced in the collisions. In each of these disjoint categories, we use the razor kinematic variables to characterize events with topological configurations incompatible with typical configurations found in Standard Model production of the Higgs boson.
We observe an excess of events with di-photon invariant mass compatible with the Higgs boson mass and localized in a small region of the razor plane. We observe 5 events with a predicted background of 0.54 ± 0.28, an observation with a p-value of 10^-3 and a local significance of 3.35σ. This background prediction comes from 0.48 predicted non-resonant background events and 0.07 predicted SM Higgs boson events. We proceed to investigate the properties of this excess, finding that it produces a very compelling peak in the di-photon invariant mass distribution and is physically separated in the razor plane from the predicted background. Using another method of measuring the background and the significance of the excess, we find a 2.5σ deviation from the Standard Model hypothesis over a broader range of the razor plane.
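The order of magnitude of the quoted significance can be checked with a simple Poisson counting estimate; this sketch ignores the ±0.28 uncertainty on the background (which the full analysis folds in, pushing the p-value up toward the quoted 10^-3):

```python
import math

def poisson_tail(n_obs, b):
    """P(N >= n_obs) for N ~ Poisson(b): one-sided p-value of a counting excess."""
    return 1.0 - sum(math.exp(-b) * b ** k / math.factorial(k) for k in range(n_obs))

# 5 observed events on an expected background of 0.54 (background uncertainty ignored)
p = poisson_tail(5, 0.54)
print(f"p-value with fixed background: {p:.2e}")
```

The fixed-background tail probability is a few times 10^-4; marginalizing over the background uncertainty inflates it, which is consistent with the quoted p-value and local significance.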
In the second part of the thesis we transform the problem of training a classifier to distinguish events with a Higgs boson decaying to two photons from events with other sources of photon pairs into the Hamiltonian of a spin system, the ground state of which is the best classifier. We then use a quantum annealer to find the ground state of this Hamiltonian and thereby train the classifier. We find that we are able to do this successfully in fewer than 400 annealing runs for a problem of median difficulty at the largest problem size considered. The networks trained in this manner exhibit good classification performance, competitive with more complicated machine learning techniques, and are highly resistant to overtraining. We also find that the nature of the training gives access to additional solutions that can be used to improve the classification performance by up to 1.2% in some regions.
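At toy scale, the spin-Hamiltonian formulation can be made concrete, with an exhaustive ground-state search standing in for the quantum annealer (the weak-classifier outputs and labels below are invented for illustration; a real problem builds the weak classifiers from photon kinematics):

```python
from itertools import product

# Toy training set: rows give weak-classifier outputs c_i(x) in {-1,+1}; Y holds labels
C = [(+1, +1, -1), (+1, -1, +1), (-1, +1, +1), (-1, -1, -1), (+1, +1, +1)]
Y = [+1, +1, -1, -1, +1]
N = 3  # number of weak classifiers

def energy(w):
    """Squared error of the weighted vote versus the labels. Expanding the
    square yields pairwise couplings (C.C terms) and local fields (C.Y terms),
    i.e. an Ising/QUBO Hamiltonian whose ground state is the best classifier."""
    return sum((sum(wi * ci for wi, ci in zip(w, cs)) / N - y) ** 2
               for cs, y in zip(C, Y))

# Exhaustive search over binary weights stands in for quantum annealing here.
ground = min(product([0, 1], repeat=N), key=energy)
print("ground-state weights:", ground, "energy:", round(energy(ground), 3))
```

Low-lying excited states of the same Hamiltonian correspond to the "additional solutions" mentioned above: alternative weight vectors that can be averaged or ensembled to improve classification.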
Abstract:
This paper analyses the advantages and limitations of the Troll, Hargreaves and modified Thornthwaite approaches for the demarcation of the semi-arid tropics. Data from India, Africa, Brazil, Australia and Thailand were used to compare the three methods. The modified Thornthwaite approach provided the most relevant agriculturally oriented demarcation of the semi-arid tropics. This method is not only simple, but uses input data that are available for a global network of stations. Using this method, the semi-arid tropics include the major dryland or rainfed agricultural zones, with annual rainfall varying from about 400 to 1,250 mm. Major dryland crops are pearl millet, sorghum, pigeonpea and groundnut. The paper also presents a brief description of the climate, soils and farming systems of the semi-arid tropics.
Abstract:
Master's dissertation in Hotel Management (Direção e Gestão Hoteleira), Escola Superior de Gestão, Hotelaria e Turismo, Universidade do Algarve, 2015
Abstract:
The goal of the research was to investigate the energy performance of the envelopes of vertical residential buildings in the hot and humid climate of Natal, capital of Rio Grande do Norte, based on the Technical Regulation of Quality for the Energy Efficiency Level of Residential Buildings (RTQ-R), launched in 2010. The study aims to contribute to the development of design strategies appropriate to the specific local climate and to raising the energy efficiency level of the envelope. The methodological procedures included a survey of 22 (twenty-two) residential buildings, the formulation of representative prototypes based on the typological and constructive characteristics surveyed, and the classification of the energy efficiency level of the envelopes of these prototypes, using the prescriptive method of the RTQ-R as a tool, together with parametric analyses assigning different values to the following variables: shape of the typical floor plan; distribution of the dwelling compartments; orientation of the building; area and shading of openings; and thermal transmittance and solar absorptance of the opaque materials of the facade, in order to evaluate their influence on envelope performance. The main results of this work include the characterization of vertical residential buildings in Natal/RN; the verification of the adequacy of these buildings to the local climate based on the diagnosis of the thermal and energy performance of the envelope; and the identification of the variables with the most significant influence on the RTQ-R prescriptive methodology and of the design solutions most favorable to obtaining higher levels of energy efficiency by this method.
Finally, it was found that some of these solutions contradict the recommendations of the theoretical literature on environmental comfort in hot and humid climates, which indicates the need for improvement of the RTQ-R prescriptive method and for further research on efficient design solutions.
Abstract:
Master's degree in Actuarial Science
Abstract:
This work addresses preparatory acts at the crime scene, that is, the precautionary and police measures that the first police responder arriving at the scene must apply. The steps taken by the first officer to take charge of an incident, who is usually not a specialist in criminal or forensic investigation, prove highly important to the success of the investigation, since they reverberate throughout it. This initial approach is characterized not by investigative or inspection work, but by prevention and protection of the scene. The general objective of this work is to contribute to a more purposeful exploitation of the place where a crime was committed, through the best possible performance of the first police responder. The specific objectives are to define the procedures to be taken by the first responder (taking into account his specialty, the materials available, and the particularities of the investigation stage) and to define what constitutes, for that responder, a "scene crime" (crime de cenário), identifying the possible repercussions of poor crime scene management on the success of the investigation. We use the comparative method, studying different national and international procedure manuals. The frame of reference is historical materialism, as we emphasize the historical dimension of social processes, current legislation, and present-day problems in interpreting our study. The work is exploratory-explanatory in nature. We follow a deductive method, since the aim is to arrive at a particular case of the general rule, that is, at the specific procedures of the first police responder within crime scene management as a whole. The most significant results are the justification of the importance of the crime scene to criminal investigation and of the complexity this can entail for the work of the first responder.
It is possible to conclude with a standard set of actions that should be taken (a practical guide) and with ways to improve intervention through training and cooperation among personnel.
Abstract:
Background: Premature infants, who have to spend the first weeks of their lives in neonatal intensive care units (NICUs), experience pain and stress in numerous cases and are exposed to many invasive interventions. Studies have shown that uncontrolled pain experienced during early life has negative, long-term side effects, such as distress, and such experiences negatively affect the development of the central nervous system. Objectives: The purpose of the study was to examine the effects of touch on infant pain perception and the effects of a eutectic mixture of local anesthetics (EMLA) on the reduction of pain. Patients and Methods: Data for the study were collected between March and August 2012 at the neonatal clinic of a university hospital located in eastern Turkey. The study population consisted of premature infants who were undergoing treatment, had completed their first month, and were approved for the Hepatitis B vaccine. The study consisted of two experimental groups and one control group. Information forms, intervention follow-up forms, and the Premature Infant Pain Profile (PIPP) were used to collect the data. EMLA cream was applied to the vastus lateralis muscle of the first experimental group before vaccination. The second experimental group was vaccinated by imitation (placebo), without a needle tip or medicine; vaccination was simulated using instrumental touch in this group. A routine vaccination was applied in the control group. Results: Mean pain scores of the group to which EMLA was applied were statistically significantly lower (P < 0.05) than the pain scores of the other groups. Moreover, it was determined that even though no invasive intervention was applied to the newborns, touch alone caused them to feel pain in the placebo group (P < 0.005).
Conclusions: The results demonstrated that EMLA was an effective method for reducing pain in premature newborns, and that the use of instrumental touch for invasive intervention stimulated pain perception in the newborns.
Abstract:
Past and recent observations have shown that local site conditions significantly affect the behavior of seismic waves and their potential to cause destructive ground shaking. Thus, seismic microzonation studies have become crucial for seismic hazard assessment, providing the local soil characteristics needed to evaluate possible seismic effects. Among the different methods used for estimating soil characteristics, those based on ambient noise measurements, such as the H/V technique, are a cheap, non-invasive and successful way of evaluating soil properties across a study area. In this work, ambient noise measurements were taken at 240 sites around the Doon Valley, India, in order to characterize the sediment deposits. First, H/V analysis was carried out to estimate the resonant frequencies along the valley. Subsequently, some of these H/V results were inverted, using the neighborhood algorithm and the available geotechnical information, to provide an estimate of the S-wave velocity profiles at the studied sites. Using all this information, we have characterized the sedimentary deposits in different areas of the Doon Valley, providing the resonant frequency, the soil thickness, the mean S-wave velocity of the sediments, and the mean S-wave velocity in the uppermost 30 m.
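The H/V technique itself reduces to comparing horizontal and vertical amplitude spectra of the recorded noise; a minimal sketch on synthetic data (the 4 Hz resonance, record length, and crude 3-bin smoothing are invented stand-ins; real processing averages many windows and typically uses Konno-Ohmachi smoothing):

```python
import cmath
import math
import random

random.seed(1)

N, DT = 256, 1.0 / 64.0   # samples and sampling interval (s) -> a 4 s record
F0 = 4.0                  # hypothetical site resonance (Hz)

def dft_amplitude(x):
    """Naive DFT amplitude spectrum, positive-frequency bins only."""
    n = len(x)
    return [abs(sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n)))
            for k in range(n // 2)]

def smooth(a):
    """Crude 3-bin moving average (stand-in for proper spectral smoothing)."""
    return [(a[max(k - 1, 0)] + a[k] + a[min(k + 1, len(a) - 1)]) / 3
            for k in range(len(a))]

t = [i * DT for i in range(N)]
noise = lambda: random.uniform(-0.5, 0.5)
vertical = [noise() for _ in t]
north = [noise() + math.sin(2 * math.pi * F0 * ti) for ti in t]  # horizontals
east  = [noise() + math.sin(2 * math.pi * F0 * ti) for ti in t]  # carry the resonance

V, Ns, Es = (smooth(dft_amplitude(c)) for c in (vertical, north, east))
hv = [math.sqrt(Ns[k] * Es[k]) / V[k] for k in range(1, N // 2)]  # geometric-mean H over V
f_peak = (1 + hv.index(max(hv))) / (N * DT)
print(f"H/V peak at ~{f_peak:.2f} Hz")
```

The frequency of the H/V peak is the resonant-frequency estimate; in the study above, inverting the full H/V curve then yields the S-wave velocity profile.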
Abstract:
The fluctuation in water demand in the Redland community of Miami-Dade County was examined using land use data from 2001 and 2011 and water estimation techniques provided by local and state agencies. The data were converted to 30 m mosaicked raster grids indicating land use change and the associated water demand, measured in gallons per day per acre. The results indicate that, first, despite an increase in population, water demand in Redland decreased overall from 2001 to 2011. Second, conversion of agricultural land to residential land actually decreased water demand in most cases, while acquisition of farmland by public agencies also caused a sharp decline. Third, conversion of row crops and groves to nurseries was substantial and resulted in a significant increase in water demand in all such converted areas. Finally, estimating water demand based on land use, rather than population, is a more accurate approach.
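The raster bookkeeping described above amounts to mapping each 30 m cell's land-use code to a demand rate and differencing the two years; a minimal sketch (the land-use codes and gallons-per-day rates below are hypothetical, not the agencies' figures):

```python
# Hypothetical land-use codes: 0 = row crop, 1 = grove, 2 = nursery,
# 3 = residential, 4 = publicly acquired; rates in gallons/day per acre
DEMAND_GPD_PER_ACRE = {0: 900.0, 1: 650.0, 2: 3200.0, 3: 400.0, 4: 50.0}
ACRES_PER_CELL = 30 * 30 / 4046.86   # one 30 m raster cell is ~0.222 acre

# Toy 2x3 rasters: two crop cells become nurseries, one grove becomes public land
lu_2001 = [[0, 0, 1], [1, 3, 3]]
lu_2011 = [[2, 2, 1], [4, 3, 3]]

def total_demand(grid):
    """Sum cell-level water demand (gallons per day) over a land-use raster."""
    return sum(DEMAND_GPD_PER_ACRE[code] * ACRES_PER_CELL
               for row in grid for code in row)

delta = total_demand(lu_2011) - total_demand(lu_2001)
print(f"demand change: {delta:+.0f} gallons/day")
```

Even in this toy grid the two effects the study reports are visible: nursery conversion drives demand up, while public acquisition drives it down.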