857 results for Facial Object Based Method
Abstract:
Due to design- and process-related factors, there are local variations in the microstructure and mechanical behaviour of cast components. This work establishes a Digital Image Correlation (DIC) based method for characterising and investigating the effects of such local variations on the behaviour of a high pressure die cast (HPDC) aluminium alloy. Plastic behaviour is studied using gradient-solidified samples, and characterisation models for the parameters of the Hollomon equation are developed based on microstructural refinement. Samples with controlled microstructural variations are produced, and the observed DIC strain field is compared with Finite Element Method (FEM) simulation results. The results show that the DIC-based method can be applied to characterise local mechanical behaviour with high accuracy. The microstructural variations are observed to cause a redistribution of strain during tensile loading. This redistribution of strain can be predicted in the FEM simulation by incorporating local mechanical behaviour using the developed characterisation model; a homogeneous FEM simulation is unable to predict the observed behaviour. The results motivate the application of a previously proposed simulation strategy, which is able to predict local variations in mechanical behaviour and incorporate them into FEM simulations already at the design stage of cast components.
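The Hollomon equation referenced above relates true stress to true plastic strain as σ = Kεⁿ, so both parameters can be recovered by linear regression in log-log space. A minimal sketch (function and variable names are illustrative, not taken from the paper):

```python
import numpy as np

def fit_hollomon(true_plastic_strain, true_stress):
    """Fit Hollomon parameters (K, n) for sigma = K * eps**n
    via linear regression in log-log space."""
    log_eps = np.log(true_plastic_strain)
    log_sig = np.log(true_stress)
    n, log_K = np.polyfit(log_eps, log_sig, 1)  # slope = n, intercept = ln(K)
    return np.exp(log_K), n

# Synthetic check: data generated with K = 400 MPa, n = 0.2
eps = np.linspace(0.01, 0.15, 50)
sigma = 400.0 * eps**0.2
K, n = fit_hollomon(eps, sigma)
print(f"K = {K:.1f} MPa, n = {n:.3f}")
```

Fitting in log space keeps the problem linear; the characterisation models in the paper then tie K and n to the degree of microstructural refinement.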
Abstract:
The homozygous TGM1 variant c.320-2A>G is described in two sisters with autosomal recessive congenital ichthyosis. Cloning of the transcripts generated by this variant made it possible to identify three alternative molecular splicing mechanisms.
Abstract:
Vitis vinifera L. cv. Crimson Seedless is a late season red table grape developed in 1989, with a high market value and increasingly cultivated under protected environments to extend the availability of seedless table grapes into the late fall. The purpose of this work was to evaluate leaf water potential and sap flow as indicators of water stress in Crimson Seedless vines under standard and reduced irrigation strategies, the latter consisting of 70 % of the standard irrigation depth. Additionally, two sub-treatments were applied, consisting of normal irrigation throughout the growing season and a short irrigation-induced stress period between veraison and harvest. Leaf water potential measurements coherently signaled crop-available water variations caused by the different irrigation treatments, suggesting that this plant-based method can be reliably used to identify water-stress conditions. The use of sap flow density data to establish a ratio between a reference 'well irrigated vine' and less irrigated vines can potentially be used to signal differences in transpiration rates, which may be suitable for improving irrigation management strategies while preventing undesirable levels of water stress. Although all four irrigation strategies resulted in the production of quality table grapes, significant differences (p ≤ 0.05) were found in both berry weight and sugar content between the standard irrigation and reduced irrigation treatments. Reduced irrigation slightly increased the average berry size as well as the sugar content and technical maturity index. The 2-week irrigation stress period had a negative effect on these parameters.
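As a rough illustration of the sap-flow ratio idea described above, a sketch that normalizes the sap flow density of less irrigated vines by a well-irrigated reference vine (values and units are hypothetical, not study data):

```python
import numpy as np

def sap_flow_ratio(deficit_flow, reference_flow):
    """Ratio of sap flow density of a deficit-irrigated vine to a
    well-irrigated reference vine; values well below 1 suggest
    reduced transpiration, i.e. possible water stress."""
    deficit_flow = np.asarray(deficit_flow, dtype=float)
    reference_flow = np.asarray(reference_flow, dtype=float)
    return deficit_flow / reference_flow

# Hypothetical hourly sap flow densities (g cm^-2 h^-1)
reference = [12.0, 15.5, 18.2, 16.9]
deficit = [9.5, 11.0, 12.1, 11.4]
print(np.round(sap_flow_ratio(deficit, reference), 2))
```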
Abstract:
This dissertation, comprising three separate studies, focuses on the relationship between remote work adoption and employee job performance, analyzing employee social isolation and job concentration as the main mediators of this relationship. It also examines the impact of concern about COVID-19 and emotional stability as moderators of these relationships. Using a survey-based method in an emergency homeworking context, the first study found that social isolation had a negative effect on remote work productivity and satisfaction, and that COVID-19 concerns affected this relationship differently for individuals with high and low levels of concern. The second study, a diary study analyzing hybrid workers, found a positive correlation between work from home (WFH) adoption and job performance through social isolation and job concentration, with emotional stability serving respectively as a buffer and a booster in the relationships between WFH and the mediators. The third study, also a diary study of hybrid workers, confirmed the benefits of work from home on job performance and the importance of job concentration as a mediator, while suggesting that social isolation may not be significant for employee job performance, although it is relevant for employee well-being. Although each study provides its own discussion and research and practical implications, this dissertation also presents a general discussion on remote work and its psychological implications, highlighting areas for future research.
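The dissertation's exact models are not reproduced here, but the mediation logic it tests (e.g., WFH adoption → job concentration → job performance) can be sketched with a simple product-of-coefficients estimate on hypothetical data; a diary study would in practice use multilevel models:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
wfh = rng.normal(size=n)                                  # X: WFH adoption
concentration = 0.5 * wfh + rng.normal(size=n)            # M: job concentration
performance = 0.4 * concentration + 0.1 * wfh + rng.normal(size=n)  # Y

# Path a: X -> M
a = sm.OLS(concentration, sm.add_constant(wfh)).fit().params[1]
# Path b: M -> Y, controlling for X
Xb = sm.add_constant(np.column_stack([wfh, concentration]))
b = sm.OLS(performance, Xb).fit().params[2]

print(f"indirect effect a*b = {a * b:.3f}")  # mediated share of the X -> Y effect
```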
Abstract:
INTRODUCTION Endograft deployment is a well-known cause of increased arterial stiffness, and increased arterial stiffness is a recognized cardiovascular risk factor. A possible harmful effect of endograft deployment on cardiac function therefore warrants investigation. The aim of this study was to evaluate the impact of endograft deployment on the arterial stiffness and cardiac geometry of patients treated for aortic aneurysm, in order to detect modifications that could justify an increased cardiac mortality at follow-up. MATERIALS AND METHODS Over a period of 3 years, patients undergoing elective EVAR for infrarenal aortic pathologies in two university centers in Emilia Romagna were examined. All patients underwent pre-operative and six-month post-operative Pulse Wave Velocity (PWV) examination, using an ultrasound-based method performed by vascular surgeons, together with trans-thoracic echocardiography in order to evaluate cardiac chamber geometry before and after treatment. RESULTS 69 patients were enrolled. After 36 months, 36 patients (52%) had completed the 6-month follow-up examination. The ultrasound-based carotid-femoral PWV measurements performed preoperatively and 6 months after the procedure revealed a significant postoperative increase of cf-PWV (11.6±3.6 m/s vs 12.3±8 m/s; p = 0.037). Postoperative LVtdV (90±28.3 ml/m2 vs 99.1±29.7 ml/m2; p = 0.031), LVtdVi (47.4±15.9 ml/m2 vs 51.9±14.9 ml/m2; p = 0.050) and IVStd (12±1.5 mm vs 12.1±1.3 mm; p = 0.027) were significantly increased compared with preoperative measures. Postoperative E/A (0.76±0.26 vs 0.6±0.67; p = 0.011), E′ lateral (9.5±2.6 vs 7.9±2.6; p = 0.024) and A′ septal (10.8±1.5 vs 8.9±2; p = 0.005) were significantly reduced compared with preoperative measurements. CONCLUSION The endovascular treatment of the abdominal aorta causes an immediate and significant increase in aortic stiffness. This increase reflects negatively on patients' cardiac geometry, inducing left ventricle hypertrophy and mild diastolic dysfunction just 6 months after endograft implantation. Further investigations and long-term results are necessary to assess whether this negative remodeling could affect the cardiac outcome of patients treated with the endovascular approach.
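Carotid-femoral PWV itself is a simple ratio of path length to pulse transit time; a toy calculation (hypothetical distance and timing, not study data) illustrating values in the range reported above:

```python
def pulse_wave_velocity(path_length_m, transit_time_s):
    """Carotid-femoral PWV (m/s): pulse travel distance over transit time."""
    return path_length_m / transit_time_s

# Hypothetical example: 0.50 m effective carotid-femoral distance and a
# 43 ms transit time give ~11.6 m/s, comparable to the preoperative values
print(round(pulse_wave_velocity(0.50, 0.043), 1))
```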
Abstract:
Earthquake prediction is a complex task for scientists due to the rare occurrence of high-intensity earthquakes and their inaccessible depths. Despite this challenge, it is a priority to protect infrastructure and populations living in areas of high seismic risk. Reliable forecasting requires comprehensive knowledge of seismic phenomena. In this thesis, the development, application, and comparison of both deterministic and probabilistic forecasting methods are presented. Regarding the deterministic approach, the implementation of an alarm-based method using the occurrence of strong (fore)shocks, widely felt by the population, as a precursor signal is described. This model is then applied to the retrospective prediction of Italian earthquakes of magnitude M ≥ 5.0, 5.5, and 6.0 that occurred in Italy from 1960 to 2020. Retrospective performance testing is carried out using tests and statistics specific to deterministic alarm-based models. Regarding probabilistic models, this thesis focuses mainly on the EEPAS and ETAS models. Although the EEPAS model has previously been applied and tested in some regions of the world, it has never been used for forecasting Italian earthquakes. In the thesis, the EEPAS model is used to retrospectively forecast Italian shallow earthquakes with a magnitude of M ≥ 5.0 using new MATLAB software. The forecasting performance of the probabilistic models was compared to other models using CSEP binary tests. The EEPAS and ETAS models showed different characteristics for forecasting Italian earthquakes, with EEPAS performing better in the long term and ETAS performing better in the short term. The FORE model, based on strong precursor quakes, is compared to EEPAS and ETAS using an alarm-based deterministic approach. All models perform better than a random forecasting model, with the ETAS and FORE models showing the best performance. However, to fully evaluate forecasting performance, prospective tests should be conducted. The lack of objective tests for evaluating deterministic models and comparing them with probabilistic ones was a challenge faced during the study.
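Alarm-based deterministic forecasts such as the FORE model are commonly evaluated with Molchan-style statistics: the fraction of time covered by alarms (τ) against the miss rate (ν), where a random forecast satisfies ν ≈ 1 − τ. A minimal sketch on hypothetical alarms and event times (not the thesis's actual test suite):

```python
def molchan_point(alarm_windows, event_times, total_time):
    """Fraction of time under alarm (tau) and miss rate (nu) for an
    alarm-based forecast; a random forecast has nu ~ 1 - tau."""
    alarm_time = sum(end - start for start, end in alarm_windows)
    hits = sum(any(start <= t <= end for start, end in alarm_windows)
               for t in event_times)
    tau = alarm_time / total_time
    nu = 1.0 - hits / len(event_times)
    return tau, nu

# Hypothetical alarm windows (years) and M >= 5.0 event times in a 60-year catalog
alarms = [(3.0, 4.0), (17.5, 18.5), (40.0, 41.0)]
events = [3.4, 18.1, 25.7, 40.2]
print(molchan_point(alarms, events, total_time=60.0))  # (0.05, 0.25)
```

A forecast is skillful when its (τ, ν) point falls well below the ν = 1 − τ diagonal of the random baseline.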
Abstract:
Long-term monitoring of acoustical environments is gaining popularity thanks to the relevant amount of scientific and engineering insights that it provides. The increasing interest is due to the constant growth of storage capacity and computational power to process large amounts of data. In this perspective, machine learning (ML) provides a broad family of data-driven statistical techniques to deal with large databases. Nowadays, the conventional praxis of sound level meter measurements limits the global description of a sound scene to an energetic point of view. Indeed, the equivalent continuous level Leq remains the main metric used to define an acoustic environment. Finer analyses involve the use of statistical levels. However, acoustic percentiles are based on temporal assumptions, which are not always reliable. A statistical approach, based on the study of the occurrences of sound pressure levels, would bring a different perspective to the analysis of long-term monitoring. Depicting a sound scene through the most probable sound pressure level, rather than portions of energy, provides more specific information about the activity carried out during the measurements. The statistical mode of the occurrences can capture typical behaviors of specific kinds of sound sources. The present work aims to propose an ML-based method to identify, separate and measure coexisting sound sources in real-world scenarios. It is based on long-term monitoring and is addressed to acousticians focused on the analysis of environmental noise in manifold contexts. The presented method is based on clustering analysis. Two algorithms, Gaussian Mixture Model and K-means clustering, represent the main core of a process to investigate different active spaces monitored through sound level meters. The procedure has been applied in two different contexts: university lecture halls and offices. The proposed method shows robust and reliable results in describing the acoustic scenario, and it could represent an important analytical tool for acousticians.
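A minimal sketch of the clustering core of such a method: fitting a Gaussian Mixture Model and K-means to (synthetic) sound pressure level occurrences, so that each component or centroid tracks one coexisting source. The data here are simulated, not from the monitored halls and offices:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Hypothetical 1-s sound pressure levels (dB): quiet background + speech activity
spl = np.concatenate([rng.normal(38, 2, 3000), rng.normal(62, 4, 1000)])
X = spl.reshape(-1, 1)

gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

print("GMM component means (dB):", np.sort(gmm.means_.ravel()).round(1))
print("K-means centroids (dB):  ", np.sort(km.cluster_centers_.ravel()).round(1))
```

The GMM additionally yields per-sample membership probabilities, which is what allows separating and measuring each source rather than only labelling samples.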
Abstract:
Monitoring thunderstorm activity is an essential part of operational weather surveillance given the potential hazards involved, including lightning, hail, heavy rainfall, strong winds or even tornadoes. This study has two main objectives: firstly, the description of a methodology, based on radar and total lightning data, to characterise thunderstorms in real time; secondly, the application of this methodology to 66 thunderstorms that affected Catalonia (NE Spain) in the summer of 2006. An object-oriented tracking procedure is employed, where different observation data types generate four different types of objects (radar 1-km CAPPI reflectivity composites, radar reflectivity volumetric data, cloud-to-ground lightning data and intra-cloud lightning data). In the framework proposed, these objects are the building blocks of a higher-level object, the thunderstorm. The methodology is demonstrated with a dataset of thunderstorms whose main characteristics, along the complete life cycle of the convective structures (development, maturity and dissipation), are described statistically. The development and dissipation stages present similar durations in most cases examined. In contrast, the duration of the maturity phase is much more variable and related to the thunderstorm intensity, defined here in terms of lightning flash rate. Most of the activity of IC and CG flashes is registered in the maturity stage. In the development stage few CG flashes are observed (2% to 5%), while in the dissipation phase a few more CG flashes are observed (10% to 15%). Additionally, a selection of thunderstorms is used to examine general life cycle patterns, obtained from the analysis of thunderstorm parameters normalized with respect to thunderstorm total duration and to the maximum value of the variables considered. Among other findings, the study indicates that the normalized duration of the three stages of the thunderstorm life cycle is similar in most thunderstorms, with the longest duration corresponding to the maturity stage (approximately 80% of the total time).
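The stage-wise flash statistics reported above reduce to counting flashes inside each segment of the storm's life cycle; a sketch on hypothetical flash times, with illustrative stage boundaries rather than those derived from the radar objects:

```python
def stage_flash_fractions(flash_times, stage_bounds):
    """Fraction of flashes falling in each life-cycle stage.
    stage_bounds = (t0, t1, t2, t3): boundaries separating the
    development, maturity and dissipation stages over the storm lifetime."""
    t0, t1, t2, t3 = stage_bounds
    stages = {"development": (t0, t1),
              "maturity": (t1, t2),
              "dissipation": (t2, t3)}
    total = len(flash_times)
    return {name: sum(lo <= t < hi for t in flash_times) / total
            for name, (lo, hi) in stages.items()}

# Hypothetical CG flash times (minutes) for a storm lasting 90 min
cg_times = [12, 25, 28, 33, 35, 41, 44, 47, 52, 58, 70, 74]
print(stage_flash_fractions(cg_times, (0, 20, 65, 90)))
# most flashes fall in the maturity stage, as in the study
```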
Abstract:
The human face provides useful information during interaction; therefore, any system integrating Vision-Based Human Computer Interaction requires fast and reliable face and facial feature detection. Different approaches have focused on this ability, but only open-source implementations have been extensively used by researchers. A good example is the Viola–Jones object detection framework, which has been frequently used, particularly in the context of facial processing.
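The Viola–Jones framework mentioned above is the cascade of boosted Haar-feature classifiers shipped with OpenCV, so a face detector can be run in a few lines (the input file name is hypothetical):

```python
import cv2

# Viola-Jones as distributed with OpenCV: a boosted Haar-feature cascade
# evaluated over a sliding window at multiple scales.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

img = cv2.imread("portrait.jpg")  # hypothetical input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:  # draw one rectangle per detected face
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("detected.jpg", img)
```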
Abstract:
A parts-based model is a parametrization of an object class using a collection of landmarks following the object structure. The matching of parts-based models is one of the problems where pairwise Conditional Random Fields have been successfully applied. The main reason for their effectiveness is tractable inference and learning due to the simplicity of the involved graphs, usually trees. However, these models do not consider possible patterns of statistics among sets of landmarks, and thus they suffer from using overly myopic information. To overcome this limitation, we propose a novel structure based on hierarchical Conditional Random Fields, which we explain in the first part of this thesis. We build a hierarchy of combinations of landmarks, where matching is performed taking into account the whole hierarchy. To preserve tractable inference we effectively sample the label set. We test our method on facial feature selection and human pose estimation on two challenging datasets: Buffy and MultiPIE. In the second part of this thesis, we present a novel approach to multiple kernel combination that relies on stacked classification. This method can be used to evaluate the landmarks of the parts-based model approach. Our method is based on combining the responses of a set of independent classifiers for each individual kernel. Unlike earlier approaches that linearly combine kernel responses, our approach uses them as inputs to another set of classifiers. We show that we outperform state-of-the-art methods on most of the standard benchmark datasets.
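The stacked kernel combination described in the second part can be sketched as follows: one classifier per kernel, whose responses become the input features of a second-level classifier. The data are synthetic, and a rigorous version would use cross-validated responses for the meta-level to avoid training-set leakage:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=400, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# One independent classifier per kernel
kernels = ["linear", "rbf", "poly"]
base = [SVC(kernel=k).fit(X_tr, y_tr) for k in kernels]

# Their responses feed a second-level (stacked) classifier,
# instead of being combined linearly with fixed weights
meta_tr = np.column_stack([c.decision_function(X_tr) for c in base])
meta_te = np.column_stack([c.decision_function(X_te) for c in base])

stacker = LogisticRegression().fit(meta_tr, y_tr)
print("stacked accuracy:", stacker.score(meta_te, y_te))
```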
Abstract:
Object detection is a fundamental task of computer vision that is utilized as a core part in a number of industrial and scientific applications, for example, in robotics, where objects need to be correctly detected and localized prior to being grasped and manipulated. Existing object detectors vary in (i) the amount of supervision they need for training, (ii) the type of learning method adopted (generative or discriminative) and (iii) the amount of spatial information used in the object model (model-free, using no spatial information in the object model, or model-based, with an explicit spatial model of an object). Although some existing methods report good performance in the detection of certain objects, the results tend to be application specific and no universal method has been found that clearly outperforms all others in all areas. This work proposes a novel generative part-based object detector. The generative learning procedure of the developed method allows learning from positive examples only. The detector is based on finding semantically meaningful parts of the object (i.e. a part detector) that can provide additional information beyond object location, for example, pose. The object class model, i.e. the appearance of the object parts and their spatial variance (constellation), is explicitly modelled in a fully probabilistic manner. The appearance is based on bio-inspired complex-valued Gabor features that are transformed to part probabilities by an unsupervised Gaussian Mixture Model (GMM). The proposed novel randomized GMM enables learning from only a few training examples. The probabilistic spatial model of the part configurations is constructed with a mixture of 2D Gaussians. The appearance of the parts of the object is learned in an object canonical space that removes geometric variations from the part appearance model. Robustness to pose variations is achieved by object pose quantization, which is more efficient than the previously used scale and orientation shifts in the Gabor feature space. Performance of the resulting generative object detector is characterized by high recall with low precision, i.e. the generative detector produces a large number of false positive detections. Thus a discriminative classifier is used to prune false positive candidate detections produced by the generative detector, improving its precision while keeping recall high. Using only a small number of positive examples, the developed object detector performs comparably to state-of-the-art discriminative methods.
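A rough sketch of the appearance pipeline described above: a Gabor filter bank (real-valued here for brevity, where the thesis uses complex-valued features) whose per-pixel responses are turned into soft part probabilities by an unsupervised GMM. File name, filter parameters and component count are illustrative:

```python
import cv2
import numpy as np
from sklearn.mixture import GaussianMixture

def gabor_features(gray, thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    """Stack responses of a small Gabor filter bank into per-pixel features."""
    responses = []
    for theta in thetas:
        kern = cv2.getGaborKernel((21, 21), sigma=4.0, theta=theta,
                                  lambd=10.0, gamma=0.5)
        responses.append(cv2.filter2D(gray, cv2.CV_32F, kern))
    return np.stack(responses, axis=-1).reshape(-1, len(thetas))

gray = cv2.imread("object_part.png", cv2.IMREAD_GRAYSCALE)  # hypothetical crop
feats = gabor_features(gray.astype(np.float32) / 255.0)

# Unsupervised GMM turns raw Gabor responses into soft part probabilities
gmm = GaussianMixture(n_components=5, covariance_type="diag",
                      random_state=0).fit(feats)
part_probs = gmm.predict_proba(feats)  # shape: (n_pixels, n_parts)
```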
Abstract:
In this letter, a semiautomatic method for road extraction in object space is proposed that combines a stereoscopic pair of low-resolution aerial images with a digital terrain model (DTM) structured as a triangulated irregular network (TIN). First, we formulate an objective function in the object space to allow the modeling of roads in 3-D. In this model, the TIN-based DTM allows the search for the optimal polyline to be restricted along a narrow band that is overlaid upon it. Finally, the optimal polyline for each road is obtained by optimizing the objective function using the dynamic programming optimization algorithm. A few seed points need to be supplied by an operator. To evaluate the performance of the proposed method, a set of experiments was designed using two stereoscopic pairs of low-resolution aerial images and a TIN-based DTM with an average resolution of 1 m. The experimental results showed that the proposed method worked properly, even when faced with anomalies along roads, such as obstructions caused by shadows and trees.
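The dynamic-programming step can be illustrated on a toy cost grid standing in for the narrow band around a road: each column keeps the cheapest way to reach every row, with a smoothness penalty on vertical jumps. All values are hypothetical, and the real method optimizes a 3-D objective over the TIN-based DTM:

```python
import numpy as np

def optimal_polyline(cost, smooth=1.0, max_step=2):
    """Minimum-cost path (one row per column) through a cost grid,
    penalizing vertical jumps -- a 1-D dynamic-programming analogue of
    optimizing a road polyline inside a narrow band."""
    rows, cols = cost.shape
    acc = cost.copy()                       # accumulated cost
    back = np.zeros((rows, cols), dtype=int)  # backpointers
    for j in range(1, cols):
        for i in range(rows):
            lo, hi = max(0, i - max_step), min(rows, i + max_step + 1)
            prev = acc[lo:hi, j - 1] + smooth * np.abs(np.arange(lo, hi) - i)
            k = int(np.argmin(prev))
            acc[i, j] = cost[i, j] + prev[k]
            back[i, j] = lo + k
    path = [int(np.argmin(acc[:, -1]))]
    for j in range(cols - 1, 0, -1):        # backtrack from the last column
        path.append(back[path[-1], j])
    return path[::-1]

# Hypothetical band: low cost along a gently curving "road"
cost = np.ones((7, 12))
road_rows = [3, 3, 2, 2, 3, 3, 4, 4, 3, 3, 3, 2]
for j, i in enumerate(road_rows):
    cost[i, j] = 0.1
print(optimal_polyline(cost))  # recovers the low-cost row sequence
```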
Abstract:
Facial reconstruction is a method that seeks to recreate a person's facial appearance from his/her skull. This technique can be the last resource used in a forensic investigation, when identification techniques such as DNA analysis, dental records, fingerprints and radiographic comparison cannot be used to identify a body or skeletal remains. To perform facial reconstruction, data on facial soft tissue thickness are necessary. The scientific literature has described differences in the thickness of facial soft tissue between ethnic groups, and different databases of soft tissue thickness have been published. There are no literature records of facial reconstruction works carried out with soft tissue data obtained from samples of Brazilian subjects, nor reports of digital forensic facial reconstruction performed in Brazil. Two databases of soft tissue thickness have been published for the Brazilian population: one obtained from measurements performed on fresh cadavers (Fresh Cadavers pattern), and another from measurements using magnetic resonance imaging (Magnetic Resonance pattern). This study aims to perform three different characterized digital forensic facial reconstructions (with hair, eyelashes and eyebrows) of a Brazilian subject (based on an international pattern and two Brazilian patterns for facial soft tissue thickness), and to evaluate the reconstructions by comparing them to photos of the individual and of nine other subjects. The DICOM data of the Computed Tomography (CT) scan donated by a volunteer were converted into stereolithography (STL) files and used for the creation of the digital facial reconstructions. Once the three reconstructions were performed, they were compared to photographs of the subject who had the face reconstructed and of nine other subjects. Thirty examiners participated in this recognition process. The target subject was recognized by 26.67% of the examiners in the reconstruction performed with the Brazilian Magnetic Resonance pattern, 23.33% in the reconstruction performed with the Brazilian Fresh Cadavers pattern and 20.00% in the reconstruction performed with the International pattern, the target subject being the most recognized subject under the first two patterns. The rate of correct recognitions of the target subject indicates that digital forensic facial reconstruction, conducted with the parameters used in this study, may be a useful tool.