967 results for Railroad safety, Bayesian methods, Accident modification factor, Countermeasure selection
Abstract:
A reliable perception of the real world is a key feature for autonomous vehicles and Advanced Driver Assistance Systems (ADAS). Obstacle detection (OD) is one of the main components for the correct reconstruction of the dynamic world. Historical approaches based on stereo vision and other 3D perception technologies (e.g. LIDAR) have been adapted first to ADAS and later to autonomous ground vehicles, providing excellent results. Obstacle detection is a very broad field that has attracted a large body of work in recent years. Academic research has clearly established the essential role of these systems in active safety systems for accident prevention, reflected also in the innovative systems introduced by industry. Such systems need to accurately assess situational criticalities and, simultaneously, the driver's awareness of them; this requires obstacle detection algorithms that are reliable and accurate, providing real-time output, a stable and robust representation of the environment, and an estimation independent of lighting and weather conditions. Initial systems relied on a single exteroceptive sensor (e.g. radar or laser for ACC, a camera for LDW) in addition to proprioceptive sensors such as wheel-speed and yaw-rate sensors. Current systems, however, such as full-speed-range ACC or autonomous braking for collision avoidance, require multiple sensors, since no single sensor can meet these requirements on its own. This has led the community to combine sensors in order to exploit the benefits of each. Pedestrian and vehicle detection are among the major thrusts in situational criticality assessment and remain an active area of research, with ADAS as their most prominent use case.
Vehicles should be equipped with sensing capabilities able to detect and act on objects in dangerous situations, where the driver would not be able to avoid a collision. A full ADAS or autonomous vehicle, with regard to pedestrians and vehicles, would not only include detection but also tracking, orientation, intent analysis, and collision prediction. The system detects obstacles using a probabilistic occupancy grid built from a multi-resolution disparity map. Obstacles classification is based on an AdaBoost SoftCascade trained on Aggregate Channel Features. A final stage of tracking and fusion guarantees stability and robustness to the result.
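The probabilistic occupancy grid mentioned above can be sketched as a standard log-odds update, where stereo disparity evidence votes cells towards occupied or free. The evidence values below are illustrative defaults, not the parameters used in the work:

```python
import numpy as np

def update_occupancy(grid_logodds, cell_hits, cell_misses, l_occ=0.85, l_free=-0.4):
    """Log-odds update of an occupancy grid: cells with stereo evidence of an
    obstacle gain l_occ per hit, observed-free cells gain l_free per miss.
    l_occ / l_free are hypothetical sensor-model values."""
    grid_logodds = grid_logodds + cell_hits * l_occ + cell_misses * l_free
    # Clamp to keep the grid responsive to new evidence.
    return np.clip(grid_logodds, -10.0, 10.0)

def occupancy_prob(grid_logodds):
    """Convert log-odds back to occupancy probability via the logistic function."""
    return 1.0 / (1.0 + np.exp(-grid_logodds))
```

A cell repeatedly hit by disparity evidence drifts towards probability 1, while free-space observations push it back towards 0, which is what gives the representation its stability over time.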
Abstract:
We propose a Bayesian framework for regression problems, covering areas usually dealt with by function approximation. An online learning algorithm is derived that solves regression problems with a Kalman filter. Its solution always improves with increasing model complexity, without the risk of over-fitting, and in the infinite-dimension limit it approaches the true Bayesian posterior. The issues of prior selection and over-fitting are also discussed, showing that some commonly held beliefs are misleading. The practical implementation is summarised. Simulations using 13 popular publicly available data sets demonstrate the method and highlight important issues concerning the choice of priors.
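The core idea of online Bayesian regression via a Kalman filter can be sketched in finite dimension: each observation triggers a Kalman-style update of the posterior mean and covariance over the weights. This is a minimal sketch of the general technique, not the paper's exact algorithm; the prior variance `tau2` and noise variance `sigma2` are assumed known:

```python
import numpy as np

class BayesianLinearRegressor:
    """Online Bayesian linear regression with Kalman-filter updates."""

    def __init__(self, dim, tau2=10.0, sigma2=1.0):
        self.mu = np.zeros(dim)       # posterior mean of the weights
        self.P = tau2 * np.eye(dim)   # posterior covariance (prior: tau2 * I)
        self.sigma2 = sigma2          # observation noise variance

    def update(self, x, y):
        """Condition the posterior on one observation (x, y)."""
        x = np.asarray(x, float)
        Px = self.P @ x
        s = x @ Px + self.sigma2      # innovation variance
        k = Px / s                    # Kalman gain
        self.mu = self.mu + k * (y - x @ self.mu)
        self.P = self.P - np.outer(k, Px)

    def predict(self, x):
        return np.asarray(x, float) @ self.mu
```

Because the update is exact for a Gaussian prior and likelihood, adding basis functions (larger `dim`) only refines the posterior, which mirrors the abstract's claim that the solution improves with model complexity.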
Abstract:
Conventional feed-forward neural networks have used the sum-of-squares cost function for training. A new cost function is presented here with a description-length interpretation based on Rissanen's Minimum Description Length principle. It is a heuristic that can be roughly interpreted as the number of data points fit by the model. Rather than seeking optimal descriptions, the cost function forms minimum descriptions in a naive way for computational convenience; it is therefore called the Naive Description Length cost function. Finding minimum-description models is shown to be closely related to identifying clusters in the data. As a consequence, the minimum of this cost function approximates the most probable mode of the data, whereas the sum-of-squares cost function approximates the mean. The new cost function is shown to provide information about the structure of the data, obtained by inspecting the dependence of the error on the amount of regularisation. This structure provides a method for selecting regularisation parameters as an alternative or supplement to Bayesian methods. The new cost function is tested on a number of multi-valued problems, such as a simple inverse kinematics problem, as well as on several classification and regression problems. The mode-seeking property of this cost function is shown to improve prediction in time-series problems. Description-length principles are used in a similar fashion to derive a regulariser that controls network complexity.
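The description-length idea can be made concrete with a generic two-part code: total cost = bits to describe the model parameters + bits to describe the residuals under the model. This is a standard MDL/BIC-style score for illustration only, not the paper's Naive Description Length cost function:

```python
import numpy as np

def two_part_description_length(residuals, num_params, num_points):
    """Generic two-part MDL score: (k/2)*log(n) parameter cost plus a
    Gaussian code length for the residuals. Illustrative only; the
    paper's actual cost function differs."""
    mse = float(np.mean(np.square(residuals)))
    model_cost = 0.5 * num_params * np.log(num_points)   # bits for the model
    data_cost = 0.5 * num_points * np.log(mse)           # bits for the data given the model
    return model_cost + data_cost
```

Comparing this score across candidate models trades goodness of fit against complexity, which is the same trade-off the abstract exploits to select regularisation parameters.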
Abstract:
The literature discusses several methods to control for self-selection effects but provides little guidance on which method to use in a setting with a limited number of variables. The authors theoretically compare and empirically assess the performance of different matching methods and instrumental variable and control function methods in this type of setting by investigating the effect of online banking on product usage. Hybrid matching in combination with the Gaussian kernel algorithm outperforms the other methods with respect to predictive validity. The empirical finding of large self-selection effects indicates the importance of controlling for these effects when assessing the effectiveness of marketing activities.
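The Gaussian-kernel matching mentioned above can be sketched with a standard kernel-matching estimator: each treated unit's counterfactual outcome is a kernel-weighted average of control outcomes, weighted by propensity-score distance. The bandwidth value is illustrative, and this is a textbook form rather than the authors' exact hybrid procedure:

```python
import numpy as np

def gaussian_kernel_match(ps_treated, ps_control, y_control, bandwidth=0.05):
    """For each treated unit's propensity score, estimate the counterfactual
    outcome as a Gaussian-kernel-weighted average of control outcomes."""
    ps_control = np.asarray(ps_control, float)
    y_control = np.asarray(y_control, float)
    counterfactuals = []
    for p in np.asarray(ps_treated, float):
        # Controls with similar propensity scores get near-unit weight.
        w = np.exp(-((ps_control - p) ** 2) / (2 * bandwidth ** 2))
        counterfactuals.append(np.sum(w * y_control) / np.sum(w))
    return np.array(counterfactuals)
```

The treatment effect estimate is then the mean difference between treated outcomes and these counterfactuals, which is how a self-selection-corrected effect of online banking on product usage would be read off.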
Abstract:
Tool life is an important factor to consider during the optimisation of a machining process, since cutting parameters can be adjusted to optimise tool changing, reducing production cost and time. The performance of a tool is also directly linked to the generated surface roughness, which matters in cases with strict surface-quality requirements. The prediction of tool life and of the resulting surface roughness in milling operations has attracted considerable research effort. The research reported herein focuses on defining the influence of milling cutting parameters, such as cutting speed, feed rate, and axial depth of cut, on three major tool-performance parameters: tool life, material removal, and surface roughness. It seeks to define methods that allow the selection of optimal parameters for best tool performance when face milling 416 stainless steel bars. For this study the Taguchi method was applied with a special orthogonal-array design that allows the entire parameter space to be studied with only a small number of experiments, saving experimental cost and time. The findings were that cutting speed has the most influence on tool life and surface roughness and very limited influence on material removal. Finally, tool performance can be judged either from tool life or from the volume of material removed.
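Taguchi analyses of this kind are commonly evaluated with signal-to-noise (S/N) ratios; the two standard forms below fit the responses in the abstract ("larger is better" for tool life and material removal, "smaller is better" for surface roughness). These are the textbook formulas, offered as a sketch of the analysis rather than the study's exact computation:

```python
import numpy as np

def sn_larger_is_better(values):
    """Taguchi 'larger-is-better' S/N ratio, e.g. for tool life:
    S/N = -10 * log10(mean(1 / y^2))."""
    y = np.asarray(values, float)
    return -10.0 * np.log10(np.mean(1.0 / y ** 2))

def sn_smaller_is_better(values):
    """Taguchi 'smaller-is-better' S/N ratio, e.g. for surface roughness:
    S/N = -10 * log10(mean(y^2))."""
    y = np.asarray(values, float)
    return -10.0 * np.log10(np.mean(y ** 2))
```

For each factor level of the orthogonal array, the mean S/N ratio across its experimental runs is compared; the level with the highest S/N is the optimal setting for that factor.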
Abstract:
Polycystic ovary syndrome (PCOS) is considered the most common endocrine disorder in women of reproductive age, with a prevalence ranging from 15 to 20%. In addition to hormonal and reproductive changes, risk factors for developing cardiovascular disease (CVD) and diabetes mellitus are common in PCOS: insulin resistance (IR), visceral obesity, chronic low-grade inflammation, and dyslipidemia. Due to the high frequency of obesity associated with PCOS, weight loss is considered the first-line treatment for the syndrome, improving metabolic parameters, normalizing serum androgens, and restoring the reproductive function of these patients. Objectives: To evaluate inflammatory markers and IR in women with PCOS and in healthy ovulatory women of different nutritional status, and to assess how these parameters respond to weight loss through caloric restriction in these women. Methods: Tumor necrosis factor-alpha (TNF-α), interleukin-6 (IL-6), and C-reactive protein (CRP) were assessed in serum samples from 40 women of childbearing age. The volunteers were divided into four groups: Group I (non-eutrophic with PCOS, n = 12); Group II (non-eutrophic without PCOS, n = 10); Group III (eutrophic with PCOS, n = 8); and Group IV (eutrophic without PCOS, n = 10). Groups were categorized by body mass index (BMI) according to the World Health Organization (WHO): non-eutrophic, i.e. overweight and obese (BMI > 25 kg/m²), and normal weight (BMI < 24.9 kg/m²). IR was determined by the HOMA-IR index. In the second phase of the study a controlled dietary intervention was performed, and inflammatory parameters were evaluated in 21 overweight and obese women with PCOS before and after weight loss. All patients received a low-calorie diet reducing regular consumption by 500 kcal/day, with standard concentrations of macronutrients. Results: Phase 1: PCOS patients showed increased levels of CRP (p < 0.01) and HOMA-IR (p < 0.01).
When divided by BMI, both the non-eutrophic group with PCOS (I) and the eutrophic group with PCOS (III) showed increased levels of CRP (I = 2.35 ± 0.55 mg/L and III = 2.63 ± 0.65 mg/L; p < 0.01) and HOMA-IR (I = 2.16 ± 2.54 and III = 1.07 ± 0.55; p < 0.01). There were no differences in TNF-α and IL-6 between groups. Phase 2: After weight loss of 5% of the initial weight, all assessed components of the serum inflammatory profile were reduced: CRP (154.75 ± 19.33) vs (78.06 ± 8.9), TNF-α (10.89 ± 5.09) vs (6.39 ± 1.41), and IL-6 (154.75 ± 19.33) vs (78.06 ± 8.09) (p < 0.01), in association with improvement in some of the hormonal parameters evaluated. Conclusion: PCOS contributed to the development of chronic inflammation and to changes in glucose metabolism, increasing CRP, insulin, and HOMA-IR independently of nutritional status. Weight loss through caloric restriction improved the inflammatory condition and hormonal status of the evaluated patients.
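The HOMA-IR index used above is a simple closed-form computation from fasting glucose and insulin; the standard formula can be expressed as a small helper (units as noted in the docstring):

```python
def homa_ir(glucose_mg_dl, insulin_uU_ml):
    """HOMA-IR = fasting insulin (µU/mL) * fasting glucose (mg/dL) / 405,
    equivalently insulin * glucose (mmol/L) / 22.5."""
    return insulin_uU_ml * glucose_mg_dl / 405.0
```

For example, a fasting glucose of 90 mg/dL with fasting insulin of 9 µU/mL gives HOMA-IR = 2.0, close to the group-I mean reported above.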
Abstract:
Valuable genetic variation for bean breeding programs is held within the common bean secondary gene pool, which consists of Phaseolus albescens, P. coccineus, P. costaricensis, and P. dumosus. However, the use of close relatives for bean improvement is limited by the lack of knowledge about the genetic variation and genetic plasticity of many of these species. Characterisation and analysis of the genetic diversity among the bean's wild relatives is necessary; in addition, conflicting phylogenies and relationships need to be understood, and the hypothesis of a hybrid origin of P. dumosus needs to be tested. This thesis research was oriented towards generating information about the patterns of relationships within the common bean secondary gene pool, with particular focus on the species Phaseolus dumosus. This species displays a set of characteristics of agronomic interest, not only for the direct improvement of common bean but also as a source of valuable genes for adaptation to climate change. Here I undertake the first comprehensive study of the genetic diversity of P. dumosus as ascertained from both nuclear and chloroplast genome markers. A germplasm collection of the ancestral forms of P. dumosus, together with wild, landrace, and cultivar representatives of all other species of the common bean secondary gene pool, was used to analyse the genetic diversity, phylogenetic relationships, and structure of P. dumosus. Data on molecular variation were generated from sequences of the cpDNA loci accD-psaI spacer, trnT-trnL spacer, trnL intron, and rps14-psaB spacer, and from the nrDNA ITS region. A whole-genome DArT array was developed and used for genotyping P. dumosus and its closest relatives. A total of 4208 polymorphic markers were generated in the DArT array, of which 742 presented a call rate >95% and zero discordance. DArT markers revealed moderate genetic polymorphism among P. dumosus samples (13% of polymorphic loci), while P. coccineus presented the highest level of polymorphism (88% of polymorphic loci). In the cpDNA, one ancestral haplotype was detected among all samples of all species in the secondary gene pool. The ITS region of P. dumosus revealed high homogeneity and a polymorphism bias towards the P. coccineus genome. Phylogenetic reconstructions made with maximum likelihood and Bayesian methods confirmed previously reported discrepancies between the nuclear and chloroplast genomes of P. dumosus. The outline of relationships from hybridization networks displayed a considerable number of interactions within and between species. This research provides compelling evidence that P. dumosus arose from hybridisation between P. vulgaris and P. coccineus, and confirms that P. costaricensis has likely been involved in the genesis or backcrossing events (or both) in the history of P. dumosus. The classification of the species P. persistentus was analysed based on cpDNA and ITS sequences; the results found this species to be highly related to P. vulgaris rather than similar to P. leptostachyus, as previously proposed. This research demonstrates that wild types of the secondary gene pool carry significant genetic variation, which makes them a valuable genetic resource for common bean improvement. The DArT array generated in this research is a valuable resource for breeding programs, since it has the potential to be used in several approaches including genotyping, discovery of novel traits, mapping, and marker-trait associations. Efforts should be made to search for potential populations of P. persistentus and to increase the collection of new populations of P. dumosus, P. albescens, and P. costaricensis, which may provide valuable traits for introgression into common bean and other Phaseolus crops.
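The marker-quality screen described above (call rate >95%, zero discordance) is a simple filtering step; a minimal sketch, with function and parameter names of my own choosing, could look like:

```python
import numpy as np

def filter_markers(call_rates, discordances, min_call_rate=0.95, max_discordance=0.0):
    """Return the indices of array markers passing quality thresholds:
    call rate strictly above min_call_rate and discordance at or below
    max_discordance (defaults mirror the >95% / zero-discordance criterion)."""
    call_rates = np.asarray(call_rates, float)
    discordances = np.asarray(discordances, float)
    keep = (call_rates > min_call_rate) & (discordances <= max_discordance)
    return np.flatnonzero(keep)
```

Applied to the 4208 polymorphic markers, such a filter would retain the 742 high-quality markers used in the downstream diversity analyses.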
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
Master's dissertation in Business Management, Faculdade de Economia, Universidade do Algarve, 2015
Abstract:
Background: Intensified selection of polled individuals has recently gained importance in predominantly horned dairy cattle breeds as an alternative to routine dehorning. The current polled breeding pool consists of genetically closely related artificial insemination sires with lower breeding values for performance traits, which raises questions regarding the effects of intensified selection based on this founder pool. Methods: We developed a stochastic simulation framework that combines the stochastic simulation software QMSim with a self-designed R program named QUALsim that acts as an external extension. Two traits were simulated in a dairy cattle population for 25 generations: one quantitative (QMSim) and one qualitative trait with Mendelian inheritance (i.e. polledness, QUALsim). The assignment scheme for qualitative-trait genotypes initiated realistic starting breeding situations regarding allele frequencies, true breeding values for the quantitative trait, and genetic relatedness. Intensified selection for polled cattle was achieved with an approach that weights estimated breeding values from the animal best linear unbiased prediction model for the quantitative trait, depending on genotypes or phenotypes for the polled trait, with a user-defined weighting factor. Results: Selection response for the polled trait was highest in the selection scheme based on genotypes. Selection based on phenotypes led to significantly lower allele frequencies for polled. The male selection path played a significantly greater role in the fast dissemination of polled alleles than female selection strategies. Fixation of the polled allele implies selection based on polled genotypes among males. In comparison to a base breeding scenario that did not take polledness into account, intensive selection for polled substantially reduced genetic gain for the quantitative trait after 25 generations.
Reducing selection intensity for polled males while maintaining strong selection intensity among females simultaneously decreased losses in genetic gain and achieved a final allele frequency of 0.93 for polled. Conclusions: A fast transition to a completely polled population through intensified selection for polled conflicts with the preservation of high genetic gain for the quantitative trait. Selection on male polled genotypes with moderate weighting, and on female polled phenotypes with high weighting, could be a suitable compromise regarding all important breeding aspects.
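The weighting idea (up-weighting an animal's estimated breeding value according to its polled genotype with a user-defined factor) can be sketched as below. The exact formula here is an assumption for illustration; the study's actual weighting inside the BLUP model is not specified in the abstract:

```python
def weighted_selection_index(ebv, polled_genotype, weight=0.2):
    """Illustrative selection index: the estimated breeding value (EBV) for
    the quantitative trait is scaled up by the number of polled alleles
    (2 = PP, 1 = Pp, 0 = pp) times a user-defined weighting factor.
    Hypothetical formula, not the paper's."""
    return ebv * (1.0 + weight * polled_genotype)
```

Ranking candidates by such an index trades genetic gain for the quantitative trait against faster dissemination of the polled allele, which is exactly the tension the simulations quantify.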
Abstract:
Introduction: Some cancers are preventable if exposure to carcinogenic substances in the environment is avoided. In Colombia, Cundinamarca is one of the departments with the largest increases in the cancer mortality rate, and in the municipality of Sibaté inhabitants have expressed concern about the increase of the disease. In the field of global environmental health, georeferencing applied to the study of health phenomena has been successful, with valid results. This study proposed using geographic information tools to generate time and space analyses that would make the behaviour of cancer in Sibaté visible and support hypotheses of environmental influences on concentrations of cases. Objective: To obtain the incidence and prevalence of cancer cases among inhabitants of Sibaté and to georeference the cases over a 5-year period, based on a survey of records. Methodology: Exploratory, descriptive, cross-sectional study of all cancer diagnoses between 2010 and 2014 found in the archives of the municipal Health Secretariat. Only people with permanent residence in the municipality who were diagnosed with cancer between 2010 and 2014 were included. For each case, gender, age, socioeconomic stratum, educational level, occupation, and marital status were obtained. The date of diagnosis was used for the time analysis, and the address of residence, type of cancer, and geographic coordinates for the spatial analysis. Geographic coordinates were generated with a Garmin GPS device, and maps were created with the locations of the patients' homes. The information was processed with Epi Info 7. Results: 107 registered cancer cases were found in the Sibaté Health Secretariat: 66 women and 41 men. Across both genders, 30.93% of the cases were cancers of the reproductive system, 18.56% of the digestive system, and 17.53% of the integumentary system. Two large spatial clusters of cases were present in the studied territory: one in the Pablo Neruda neighbourhood with 12 cases (21.05%) and one in the urban core of Sibaté with 38 cases (66.67%). Conclusion: The study corroborated that geographic analysis with spatio-temporal and exposure variables can be a tool for generating hypotheses about associations between cancer cases and environmental factors.
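Spatial analyses of georeferenced cases like the one above rest on distances between GPS coordinates; the standard haversine great-circle distance is the usual building block when screening for spatial clusters:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two (latitude, longitude)
    points in degrees, using the haversine formula with a 6371 km Earth radius."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = p2 - p1
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))
```

Cases whose pairwise distances fall below a chosen radius can then be grouped into candidate clusters such as the two reported for Sibaté.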
Cloud parameter retrievals from Meteosat and their effects on the shortwave radiation at the surface
Abstract:
A method based on Spinning Enhanced Visible and Infrared Imager (SEVIRI) reflectances measured at 0.6 and 3.9 µm is used to retrieve the cloud optical thickness (COT) and cloud effective radius (re) over the Iberian Peninsula. A sensitivity analysis of simulated retrievals to the input parameters demonstrates that cloud top height is an important factor in satellite retrievals of COT and re, with uncertainties around 10% for small values of COT and re; for water clouds these uncertainties can exceed 10% for small values of re. The uncertainties related to viewing geometries are around 3%. The COT and re retrievals are assessed against well-known satellite cloud products, showing that the method characterizes the cloud field well: more than 80% (82%) of the absolute differences between COT (re) mean values for all clouds (water plus ice) fall within ±10 (±10 µm), with absolute bias lower than 2 (2 µm) for COT (re) and root-mean-square error lower than 10 (8 µm) for COT (re). The cloud water path (CWP), derived from the satellite retrievals, and the shortwave cloud radiative effect at the surface (CRESW) are related for high fractional sky cover (Fsc > 0.8), showing that water clouds produce a more negative CRESW than ice clouds. The retrieved COT was also related to the cloud modification factor, which exhibits reductions and enhancements of the surface SW radiation of the order of 80% and 30%, respectively, for COT values lower than 10. A selected case study shows, using a ground-based sky camera, that some situations classified by the satellite with high Fsc values correspond to broken-cloud situations where the enhancements actually occur. For this case study, a closure between the liquid water path (LWP) obtained from the satellite retrievals and the same cloud quantity obtained from ground-based microwave measurements was performed, showing good agreement between both LWP data sets.
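Two of the surface-radiation quantities used above have simple standard definitions: the cloud modification factor (ratio of measured to clear-sky shortwave irradiance) and the surface shortwave cloud radiative effect (all-sky minus clear-sky irradiance). The helper names below are mine; the definitions are the conventional ones:

```python
def cloud_modification_factor(sw_measured, sw_clear_sky):
    """CMF = measured surface SW irradiance / modelled clear-sky irradiance.
    CMF < 1 means cloud attenuation; CMF > 1 means enhancement by broken clouds."""
    return sw_measured / sw_clear_sky

def sw_cloud_radiative_effect(sw_all_sky, sw_clear_sky):
    """CRESW at the surface: negative when clouds reduce the surface irradiance."""
    return sw_all_sky - sw_clear_sky
```

An 80% reduction as reported above corresponds to CMF = 0.2, while a 30% enhancement corresponds to CMF = 1.3.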
Abstract:
The integration of distributed and ubiquitous intelligence has emerged over the last years as the mainspring of transformative advancements in mobile radio networks. As we approach the era of “mobile for intelligence”, next-generation wireless networks are poised to undergo significant and profound changes. Notably, the overarching challenge that lies ahead is the development and implementation of integrated communication and learning mechanisms that will enable the realization of autonomous mobile radio networks. The ultimate pursuit of eliminating human-in-the-loop constitutes an ambitious challenge, necessitating a meticulous delineation of the fundamental characteristics that artificial intelligence (AI) should possess to effectively achieve this objective. This challenge represents a paradigm shift in the design, deployment, and operation of wireless networks, where conventional, static configurations give way to dynamic, adaptive, and AI-native systems capable of self-optimization, self-sustainment, and learning. This thesis aims to provide a comprehensive exploration of the fundamental principles and practical approaches required to create autonomous mobile radio networks that seamlessly integrate communication and learning components. The first chapter of this thesis introduces the notion of Predictive Quality of Service (PQoS) and adaptive optimization and expands upon the challenge to achieve adaptable, reliable, and robust network performance in dynamic and ever-changing environments. The subsequent chapter delves into the revolutionary role of generative AI in shaping next-generation autonomous networks. This chapter emphasizes achieving trustworthy uncertainty-aware generation processes with the use of approximate Bayesian methods and aims to show how generative AI can improve generalization while reducing data communication costs. Finally, the thesis embarks on the topic of distributed learning over wireless networks. 
Distributed learning and its variants, including multi-agent reinforcement learning and federated learning, have the potential to meet the scalability demands of modern data-driven applications, enabling efficient and collaborative model training across dynamic scenarios while ensuring data privacy and reducing communication overhead.
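The federated-learning setting mentioned above is typically built around a server-side aggregation step; a minimal sketch of the well-known FedAvg rule (clients' parameters averaged with weights proportional to local dataset size) illustrates why data never needs to leave the clients:

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """One FedAvg aggregation round: average client model parameters,
    weighted by the number of local training samples on each client."""
    total = float(sum(client_sizes))
    agg = np.zeros_like(np.asarray(client_weights[0], float))
    for w, n in zip(client_weights, client_sizes):
        agg += (n / total) * np.asarray(w, float)
    return agg
```

Only model parameters cross the wireless link, which is the mechanism behind the privacy preservation and reduced communication overhead claimed in the abstract.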
Abstract:
Electrocardiography (ECG) biometrics is emerging as a viable biometric trait. Recent developments at the sensor level have shown the feasibility of performing signal acquisition at the fingers and hand palms, using one-lead sensor technology and dry electrodes. These new locations lead to ECG signals with a lower signal-to-noise ratio that are more prone to noise artifacts; heart rate variability is another major challenge of this biometric trait. In this paper we propose a novel approach to ECG biometrics, with the purpose of reducing computational complexity and increasing the robustness of the recognition process, enabling the fusion of information across sessions. Our approach is based on clustering, grouping individual heartbeats by their morphology. We study several methods for automatic template selection that account for the variations observed in a person's biometric data. This approach allows the identification of different template groupings, taking heart rate variability into account, and the removal of outliers due to noise artifacts. Experimental evaluation on real-world data demonstrates the advantages of the approach.
Abstract:
Feature selection is a central problem in machine learning and pattern recognition. On large datasets (in terms of dimension and/or number of instances), using search-based or wrapper techniques can be computationally prohibitive. Moreover, many filter methods based on relevance/redundancy assessment also take a prohibitively long time on high-dimensional datasets. In this paper, we propose efficient unsupervised and supervised feature selection/ranking filters for high-dimensional datasets. These methods use low-complexity relevance and redundancy criteria, applicable to supervised, semi-supervised, and unsupervised learning, and can act as pre-processors for computationally intensive methods, focusing their attention on smaller subsets of promising features. The experimental results, with up to 10^5 features, show the time efficiency of our methods, with lower generalization error than state-of-the-art techniques, while being dramatically simpler and faster.
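A relevance/redundancy filter of the kind described above can be sketched with an mRMR-style greedy ranking: relevance as absolute correlation with the target, redundancy as mean absolute correlation with already-selected features. The paper proposes its own low-complexity criteria; this sketch only illustrates the general scheme:

```python
import numpy as np

def rank_features(X, y, num_features):
    """Greedy relevance-minus-redundancy feature ranking (mRMR-style sketch).
    Relevance: |corr(feature, target)|; redundancy: mean |corr| with the
    features selected so far. Returns the selected column indices in order."""
    X = np.asarray(X, float)
    rel = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
    selected = [int(np.argmax(rel))]          # start from the most relevant feature
    while len(selected) < num_features:
        best, best_score = None, -np.inf
        for j in range(X.shape[1]):
            if j in selected:
                continue
            red = np.mean([abs(np.corrcoef(X[:, j], X[:, s])[0, 1]) for s in selected])
            score = rel[j] - red              # reward relevance, penalise redundancy
            if score > best_score:
                best, best_score = j, score
        selected.append(best)
    return selected
```

Because each step only needs pairwise correlations, the cost stays linear in the number of candidate features per selection round, which is the property that makes such filters usable as pre-processors on high-dimensional data.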