936 results for Cluster Analysis of Variables
Abstract:
OBJECTIVE: To identify clusters of the major occurrences of leprosy and their associated socioeconomic and demographic factors. METHODS: Cases of leprosy that occurred between 1998 and 2007 in Sao Jose do Rio Preto (southeastern Brazil) were geocoded and the incidence rates were calculated by census tract. A socioeconomic classification score was obtained using principal component analysis of socioeconomic variables. Thematic maps to visualize the spatial distribution of the incidence of leprosy with respect to socioeconomic levels and demographic density were constructed using geostatistics. RESULTS: While the incidence rate for the entire city was 10.4 cases per 100,000 inhabitants annually between 1998 and 2007, the incidence rates of individual census tracts were heterogeneous, with values that ranged from 0 to 26.9 cases per 100,000 inhabitants per year. Areas with a high leprosy incidence were associated with lower socioeconomic levels. Clusters of leprosy cases were identified; however, there was no association between disease incidence and demographic density. There was a disparity between the places where the majority of ill people lived and the location of healthcare services. CONCLUSIONS: The spatial analysis techniques utilized identified the poorer neighborhoods of the city as the areas with the highest risk for the disease. These data show that health departments must prioritize politico-administrative policies to minimize the effects of social inequality and improve the standards of living, hygiene, and education of the population in order to reduce the incidence of leprosy.
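The following minimal sketch illustrates the kind of analysis described above: deriving a single socioeconomic classification score per census tract from the first principal component of standardized indicators. The indicator names and values are hypothetical, not the study's data.

import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical socioeconomic indicators for 100 census tracts
rng = np.random.default_rng(0)
tracts = pd.DataFrame({
    "mean_income":      rng.lognormal(7, 0.5, 100),
    "literacy_rate":    rng.uniform(0.7, 1.0, 100),
    "households_sewer": rng.uniform(0.3, 1.0, 100),
})

# The first principal component of the standardized variables serves as the score
score = PCA(n_components=1).fit_transform(StandardScaler().fit_transform(tracts))
tracts["socioeconomic_score"] = score.ravel()
print(tracts.head())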
Abstract:
Objective: To assess the risk factors for delayed diagnosis of uterine cervical lesions. Materials and Methods: This is a case-control study that recruited 178 women at 2 Brazilian hospitals. The cases (n = 74) were composed of women with a late diagnosis of a lesion in the uterine cervix (invasive carcinoma in any stage). The controls (n = 104) were composed of women with cervical lesions diagnosed early on (low- or high-grade intraepithelial lesions). The analysis was performed by means of a logistic regression model using a hierarchical approach. The socioeconomic and demographic variables were included at level I (distal). Level II (intermediate) included the personal and family antecedents and knowledge about the Papanicolaou test and human papillomavirus. Level III (proximal) encompassed the variables relating to individuals' care for their own health, gynecologic symptoms, and variables relating to access to the health care system. Results: The risk factors for late diagnosis of uterine cervical lesions were age older than 40 years (odds ratio [OR], 10.4; 95% confidence interval [CI], 2.3-48.4), not knowing the difference between the Papanicolaou test and gynecological pelvic examinations (OR, 2.5; 95% CI, 1.3-4.9), not thinking that the Papanicolaou test was important (OR, 4.2; 95% CI, 1.3-13.4), and abnormal vaginal bleeding (OR, 15.0; 95% CI, 6.5-35.0). Previous treatment for sexually transmissible disease was a protective factor (OR, 0.3; 95% CI, 0.1-0.8) for delayed diagnosis. Conclusions: Deficiencies in cervical cancer prevention programs in developing countries are not simply a matter of better provision and coverage of Papanicolaou tests. The misconception about the Papanicolaou test is a serious educational problem, as demonstrated by the present study.
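A minimal sketch of the hierarchical (distal to proximal) logistic regression strategy described above, assuming one illustrative variable per level; the variable names, data, and p-value screening rule are hypothetical, not the study's.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 178
df = pd.DataFrame({
    "late_diagnosis": rng.integers(0, 2, n),   # 1 = case, 0 = control
    "age_over_40":    rng.integers(0, 2, n),   # level I (distal)
    "knows_pap_test": rng.integers(0, 2, n),   # level II (intermediate)
    "abnormal_bleed": rng.integers(0, 2, n),   # level III (proximal)
})

# Fit the levels sequentially; variables retained at one level are carried into the next
level1 = smf.logit("late_diagnosis ~ age_over_40", data=df).fit(disp=False)
level2 = smf.logit("late_diagnosis ~ age_over_40 + knows_pap_test", data=df).fit(disp=False)
level3 = smf.logit("late_diagnosis ~ age_over_40 + knows_pap_test + abnormal_bleed",
                   data=df).fit(disp=False)
print(np.exp(level3.params))   # exponentiated coefficients = odds ratios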
Abstract:
A portable energy-dispersive X-ray fluorescence system was used to determine the elemental composition of 68 pottery fragments from Sambaqui do Bacanga, an archeological site in Sao Luis, Maranhao, Brazil. This site was occupied from 6600 BP until 900 BP. By determining the elemental chemical composition of those fragments, it was possible to verify the existence of engobe in 43 pottery fragments. Using two-dimensional graphs and hierarchical cluster analysis performed on fragments from the surface and 113-cm stratigraphic levels, and from the 10 to 20, 132 and 144-cm levels, it was possible to group these fragments into five distinct groups according to their stratigraphies. The results of the data grouping (two-dimensional graphs) are in agreement with the hierarchical cluster analysis by the Ward method. Copyright (C) 2011 John Wiley & Sons, Ltd.
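A short sketch of grouping fragments by elemental composition with Ward's hierarchical clustering, as named in the abstract; the element intensities below are randomly generated placeholders, not the measured XRF data.

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Rows = 68 fragments, columns = relative elemental intensities (e.g. Fe, Ca, K, Ti)
composition = np.random.rand(68, 4)

Z = linkage(composition, method="ward")          # Ward linkage on Euclidean distances
groups = fcluster(Z, t=5, criterion="maxclust")  # cut the tree into five groups
print(groups)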
Abstract:
Studies involving cDNA amplified fragment length polymorphism (cDNA-AFLP) have often used polyacrylamide gels with radiolabeled primers in order to establish the best primer combinations and to analyze and recover transcript-derived fragments. Using an automatic sequencer to establish the best primer combinations is convenient, because it saves time, reduces costs and the risks of contamination with radioactive material and acrylamide, and allows objective band matching and more precise evaluation of transcript-derived fragment intensities. This study aimed at examining the gene expression of commercial cultivars of P. guajava subjected to water and mechanical injury stresses, combining analyses by automatic sequencer and fluorescent kits for polyacrylamide gel electrophoresis. Firstly, 64 combinations of EcoRI and MseI primers were tested. The ten combinations with the highest numbers of polymorphic fragments were then selected for transcript-derived fragment recovery and cluster analysis, involving 45 saplings of P. guajava. Two groups were obtained, one composed of the control saplings and another formed by the saplings undergoing stress, with no clear distinction between stress treatments. The results revealed the convenience of combining an automatic sequencer and fluorescent kits for polyacrylamide gel electrophoresis to examine gene expression profiles. The Unweighted Pair Group Method with Arithmetic Mean (UPGMA) analysis using Euclidean distances points to a similar induced response mechanism in P. guajava undergoing water stress and mechanical injury.
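A minimal sketch of the UPGMA clustering on Euclidean distances mentioned above, applied to a presence/absence band matrix; the 0/1 matrix here is randomly generated and only stands in for the real cDNA-AFLP profiles.

import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, fcluster

bands = np.random.randint(0, 2, size=(45, 120))   # 45 saplings x 120 fragment bands
Z = linkage(pdist(bands, metric="euclidean"), method="average")  # "average" = UPGMA
groups = fcluster(Z, t=2, criterion="maxclust")   # e.g. control vs. stressed saplings
print(groups)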
Abstract:
This paper presents an alternative coupling strategy between the Boundary Element Method (BEM) and the Finite Element Method (FEM) in order to create a computational code for the analysis of geometrically nonlinear 2D frames coupled to layered soils. The soil is modeled via BEM, considering multiple inclusions and internal load lines, through an alternative formulation that eliminates traction variables on subregion interfaces. A total Lagrangian formulation based on positions is adopted to account for the geometrically nonlinear behavior of frame structures with exact kinematics. The numerical coupling is performed by an algebraic strategy that extracts and condenses the soil's equivalent stiffness matrix and contact forces, which are introduced into the frame structure's Hessian matrix and internal force vector, respectively. The formulation covers the analysis of shallow foundation structures and piles in any direction. Furthermore, the piles can pass through different layers. Numerical examples are shown in order to illustrate and confirm the accuracy and applicability of the proposed technique.
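A toy sketch of the algebraic coupling step described above: a condensed soil stiffness matrix and the equivalent contact forces are added into the frame's Hessian and internal force vector at the shared (contact) degrees of freedom. Matrix sizes, values, and DOF numbering are invented for illustration and do not reproduce the paper's formulation.

import numpy as np

n_frame = 12                      # frame degrees of freedom (toy size)
contact_dofs = [8, 9, 10, 11]     # frame DOFs shared with the soil

H_frame = np.eye(n_frame)                 # frame Hessian (tangent stiffness), placeholder
F_int   = np.zeros(n_frame)               # frame internal force vector, placeholder
K_soil  = np.eye(len(contact_dofs))       # condensed soil stiffness from the BEM (placeholder)
f_soil  = np.ones(len(contact_dofs))      # equivalent contact forces from the BEM (placeholder)

# Assemble the soil contribution into the coupled system at the contact DOFs
ix = np.ix_(contact_dofs, contact_dofs)
H_frame[ix] += K_soil
F_int[contact_dofs] += f_soil
print(H_frame, F_int)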
Abstract:
The present study aimed to comparatively verify the relation between hermit crabs and the shells they use in two populations of Loxopagurus loxochelis. Samples were collected monthly from July 2002 to June 2003, at Caraguatatuba and Ubatuba Bay, São Paulo, Brazil. The animals sampled had their sex identified and were weighed and measured; their shells were identified, measured and weighed, and their internal volume determined. To relate the hermit crabs' characteristics to the shells' variables, principal component analysis (PCA) and a regression tree were used. According to the PCA, the three gastropod shells most frequently used by L. loxochelis varied in size. The regression tree successfully explained the relationship between the hermit crabs' characteristics and the internal volume of the inhabited shell. It can be inferred that the relationship between the morphometry of an individual hermit crab and its shell is not straightforward and cannot be explained only on the basis of direct correlations between body and shell attributes. Several factors (such as the morphometry and availability of the shell, environmental conditions, and inter- and intraspecific competition) interact and seem to be taken into consideration by the hermit crabs when they choose a shell, resulting in the diversified pattern of shell occupancy shown here and elsewhere.
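The two analyses named above can be sketched as follows: a PCA summarizing crab morphometry and a regression tree predicting the internal volume of the inhabited shell. The measurements below are synthetic and only illustrate the workflow.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(2)
crab = rng.random((200, 3))                       # e.g. shield length, wet weight, chela size
shell_volume = crab @ [0.5, 1.2, 0.3] + rng.normal(0, 0.1, 200)

pcs = PCA(n_components=2).fit_transform(crab)     # summarize crab morphometry
tree = DecisionTreeRegressor(max_depth=3).fit(crab, shell_volume)
print(pcs[:3])
print(tree.feature_importances_)                  # which traits drive shell volume in the tree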
Abstract:
Introduction: Enterococcus faecalis is a member of the mammalian gastrointestinal microbiota but has been considered a leading cause of hospital-acquired infections. In the oral cavity, it is commonly detected in root canals of teeth with failed endodontic treatment. However, little is known about the virulence and genetic relatedness among E. faecalis isolates from different clinical sources. This study compared the presence of enterococcal virulence factors among root canal strains and clinical isolates from hospitalized patients to identify virulent clusters of E. faecalis. Methods: Multilocus sequence typing analysis was used to determine the genetic lineages of 40 E. faecalis clinical isolates from different sources. Virulence clusters were determined by evaluating capsule (cps) locus polymorphisms, pathogenicity island gene content, and antibiotic resistance genes by polymerase chain reaction. Results: The clinical isolates from hospitalized patients formed a phylogenetically separate group and were mostly grouped in clonal complex 2, which is a known virulent cluster of E. faecalis that has caused infection outbreaks globally. The clonal complex 2 group comprised capsule-producing strains harboring multiple antibiotic resistance and pathogenicity island genes. On the other hand, the endodontic isolates were more diverse and harbored few virulence and antibiotic resistance genes. In particular, although more closely related to isolates from hospitalized patients, capsule-producing E. faecalis strains from root canals did not carry more virulence/antibiotic resistance genes than other endodontic isolates. Conclusions: E. faecalis isolates from endodontic infections have a genetic and virulence profile different from pathogenic clusters of hospitalized patients' isolates, which is most likely due to niche specialization conferred mainly by variable regions in the genome.
Abstract:
Computational fluid dynamics (CFD) is becoming an essential tool in the prediction of the hydrodynamic loads and flow characteristics of underwater vehicles for manoeuvring studies. However, when applied to the manoeuvrability of autonomous underwater vehicles (AUVs), most studies have focused on the determination of static coefficients without considering the effects of the vehicle control surface deflection. This paper analyses the hydrodynamic loads generated on an AUV considering the combined effects of the control surface deflection and the angle of attack, using CFD software based on the Reynolds-averaged Navier–Stokes formulations. The CFD simulations are also conducted independently for the AUV bare hull and control surface to better identify their individual and interference effects and to validate the simulations by comparison with experimental results obtained in a towing tank. Several simulations of the bare hull case were conducted to select the k–ω SST turbulence model with the viscosity approach that best predicts its hydrodynamic loads. Mesh sensitivity analyses were conducted for all simulations. For the flow around the control surfaces, the CFD results were analysed according to two different methodologies, standard and nonlinear. The nonlinear regression methodology provides better results than the standard methodology for predicting the stall at the control surface. The flow simulations have shown that the occurrence of the control surface stall depends on a linear relationship between the angle of attack and the control surface deflection. This type of information can be used in designing the vehicle's autopilot system.
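A hedged sketch of fitting a nonlinear regression to a control-surface force coefficient as a function of angle of attack and deflection, in the spirit of the nonlinear methodology mentioned above. The cubic functional form, the synthetic "CFD" values, and the coefficients are illustrative assumptions, not the paper's model.

import numpy as np
from scipy.optimize import curve_fit

def force_model(x, a, b):
    alpha, delta = x
    eff = alpha + delta                      # effective incidence (deg), assumed
    return a * eff + b * eff ** 3            # cubic term mimics stall onset

alpha = np.repeat(np.arange(0, 16, 2.0), 5)                      # angles of attack
delta = np.tile(np.arange(0, 25, 5.0), 8)                        # surface deflections
cf = 0.08 * (alpha + delta) - 1e-5 * (alpha + delta) ** 3        # synthetic force coefficient

params, _ = curve_fit(force_model, (alpha, delta), cf)
print(params)   # fitted (a, b)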
Abstract:
Dengue is considered one of the most important vector-borne infections, affecting almost half of the world population with 50 to 100 million cases every year. In this paper, we present one of the simplest models that can encapsulate all the important variables related to vector control of dengue fever. The model considers the human population, the adult mosquito population and the population of immature stages, which includes eggs, larvae and pupae. The model also considers the vertical transmission of dengue in the mosquitoes and the seasonal variation in the mosquito population. From this basic model describing the dynamics of dengue infection, we deduce thresholds for avoiding the introduction of the disease and for the elimination of the disease. In particular, we deduce a Basic Reproduction Number for dengue that includes parameters related to the immature stages of the mosquito. By neglecting seasonal variation, we calculate the equilibrium values of the model's variables. We also present a sensitivity analysis of the impact of four vector-control strategies on the Basic Reproduction Number, on the Force of Infection and on the human prevalence of dengue. Each of the strategies was studied separately from the others. The analysis presented allows us to conclude that, of the available vector-control strategies, adulticide application is the most effective, followed by the reduction of exposure to mosquito bites, locating and destroying breeding places and, finally, larvicides. Current vector-control methods are concentrated on the mechanical destruction of mosquitoes' breeding places. Our results suggest that reducing the contact between vectors and hosts (biting rates) is as efficient as the logistically difficult but very effective control of adult mosquitoes.
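A generic host-vector sketch, not the paper's exact system, of the kind of compartmental ODE model described above: humans, adult mosquitoes and an immature (aquatic) stage. All parameter values and initial conditions are illustrative assumptions.

import numpy as np
from scipy.integrate import solve_ivp

def dengue(t, y, b=0.5, beta_h=0.4, beta_m=0.4, gamma=0.14,
           mu_m=0.07, mu_a=0.05, phi=5.0, sigma=0.1, K=1e5):
    Sh, Ih, Sm, Im, A = y            # susceptible/infected humans and mosquitoes, aquatic stage
    Nh = Sh + Ih
    dSh = -b * beta_h * Sh * Im / Nh                       # humans infected by mosquito bites
    dIh = b * beta_h * Sh * Im / Nh - gamma * Ih           # human recovery at rate gamma
    dA  = phi * (Sm + Im) * (1 - A / K) - (sigma + mu_a) * A   # oviposition, emergence, mortality
    dSm = sigma * A - b * beta_m * Sm * Ih / Nh - mu_m * Sm
    dIm = b * beta_m * Sm * Ih / Nh - mu_m * Im
    return [dSh, dIh, dSm, dIm, dA]

sol = solve_ivp(dengue, (0, 365), [1e5 - 10, 10, 2e5, 0, 5e4], dense_output=True)
print(sol.y[1, -1])   # infected humans after one year under these illustrative parameters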
Abstract:
In Performance-Based Earthquake Engineering (PBEE), evaluating the seismic performance (or seismic risk) of a structure at a given site has gained major attention, especially in the past decade. One of the objectives of PBEE is to quantify the seismic reliability of a structure at a site under future random earthquakes. For that purpose, Probabilistic Seismic Demand Analysis (PSDA) is used as a tool to estimate the Mean Annual Frequency (MAF) of exceeding a specified value of a structural Engineering Demand Parameter (EDP). This dissertation focuses mainly on applying the average of a number of spectral acceleration ordinates over an interval of periods, Sa,avg(T1,...,Tn), as a scalar ground motion Intensity Measure (IM) when assessing the seismic performance of inelastic structures. Since the interval of periods over which Sa,avg is computed reflects the greater or lesser influence of higher vibration modes on the inelastic response, it is appropriate to speak of improved IMs. The results obtained with these improved IMs are compared with conventional elastic-based scalar IMs (e.g., pseudo-spectral acceleration, Sa(T1), or peak ground acceleration, PGA) and with an advanced inelastic-based scalar IM (i.e., inelastic spectral displacement, Sdi). The advantages of applying improved IMs are: (i) "computability" of the seismic hazard according to traditional Probabilistic Seismic Hazard Analysis (PSHA), because ground motion prediction models are already available for Sa(Ti), and hence existing models can be employed to assess hazard in terms of Sa,avg; and (ii) "efficiency", i.e., smaller variability of the structural response, which was minimized to assess the optimal period range for computing Sa,avg. More work is needed to also assess the desirable properties of "sufficiency" and "scaling robustness", which are disregarded in this dissertation. However, for ordinary records (i.e., with no pulse-like effects), using the improved IMs is found to be more accurate than using the elastic- and inelastic-based IMs. For structural demands dominated by the first mode of vibration, the benefit of using Sa,avg can be negligible relative to the conventionally used Sa(T1) and the advanced Sdi. For structural demands with significant higher-mode contributions, an improved scalar IM that incorporates higher modes needs to be utilized. In order to fully understand the influence of the IM on the seismic risk, a simplified closed-form expression for the probability of exceeding a limit state capacity was chosen as a reliability measure under seismic excitations and implemented for Reinforced Concrete (RC) frame structures. This closed-form expression is particularly useful for the seismic assessment and design of structures, taking into account the uncertainty in the generic variables, structural "demand" and "capacity", as well as the uncertainty in seismic excitations. The adopted framework employs nonlinear Incremental Dynamic Analysis (IDA) procedures in order to estimate the variability in the response of the structure (demand) to seismic excitations, conditioned on the IM. The estimation of the seismic risk using the simplified closed-form expression is affected by the choice of IM: the resulting risk is not constant across IMs, although it remains of the same order of magnitude. Possible reasons concern the nonlinear model assumed or the insufficiency of the selected IM.
Since it is impossible to state the "real" probability of exceeding a limit state by looking at the total risk, the only way forward is to optimize the desirable properties of an IM.
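A minimal sketch of computing Sa,avg(T1,...,Tn), which is commonly defined as the geometric mean of pseudo-spectral acceleration ordinates over a period range; the response spectrum values and the period interval below are placeholders, not real ground-motion data.

import numpy as np

periods = np.linspace(0.2, 2.0, 10)           # period interval chosen around T1 and higher modes
sa = np.interp(periods, [0.1, 0.5, 1.0, 2.0, 4.0],
               [0.8, 1.2, 0.7, 0.35, 0.1])    # placeholder response spectrum (g)

sa_avg = np.exp(np.mean(np.log(sa)))           # geometric mean of the spectral ordinates
print(f"Sa,avg = {sa_avg:.3f} g")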
Abstract:
The vast majority of known proteins have not yet been experimentally characterized and little is known about their function. The design and implementation of computational tools can provide insight into the function of proteins based on their sequence, their structure, their evolutionary history and their association with other proteins. Knowledge of the three-dimensional (3D) structure of a protein can lead to a deep understanding of its mode of action and interaction, but currently the structures of <1% of sequences have been experimentally solved. For this reason, it became urgent to develop new methods that are able to computationally extract relevant information from protein sequence and structure. The starting point of my work has been the study of the properties of contacts between protein residues, since they constrain protein folding and characterize different protein structures. Prediction of residue contacts in proteins is an interesting problem whose solution may be useful in protein folding recognition and de novo design. The prediction of these contacts requires the study of the inter-residue distances, related to the specific type of amino acid pair, that are encoded in the so-called contact map. An interesting new way of analyzing those structures emerged when network studies were introduced, with pivotal papers demonstrating that protein contact networks also exhibit small-world behavior. In order to highlight constraints for the prediction of protein contact maps and for applications in the field of protein structure prediction and/or reconstruction from experimentally determined contact maps, I studied to what extent the characteristic path length and clustering coefficient of the protein contact network reveal characteristic features of protein contact maps. Provided that residue contacts are known for a protein sequence, the major features of its 3D structure could be deduced by combining this knowledge with correctly predicted motifs of secondary structure. In the second part of my work I focused on a particular protein structural motif, the coiled-coil, known to mediate a variety of fundamental biological interactions. Coiled-coils are found in a variety of structural forms and in a wide range of proteins including, for example, small units such as leucine zippers that drive the dimerization of many transcription factors, or more complex structures such as the family of viral proteins responsible for virus-host membrane fusion. The coiled-coil structural motif is estimated to account for 5-10% of the protein sequences in the various genomes. Given their biological importance, in my work I introduced a Hidden Markov Model (HMM) that exploits the evolutionary information derived from multiple sequence alignments to predict coiled-coil regions and to discriminate coiled-coil sequences. The results indicate that the new HMM outperforms all the existing programs and can be adopted for coiled-coil prediction and for large-scale genome annotation. Genome annotation is a key issue in modern computational biology, being the starting point towards the understanding of the complex processes involved in biological networks. The rapid growth in the number of protein sequences and structures available poses new fundamental problems that still await interpretation. Nevertheless, these data are at the basis of the design of new strategies for tackling problems such as the prediction of protein structure and function.
Experimental determination of the functions of all these proteins would be a hugely time-consuming and costly task and, in most instances, has not been carried out. As an example, currently only approximately 20% of annotated proteins in the Homo sapiens genome have been experimentally characterized. A commonly adopted procedure for annotating protein sequences relies on "inheritance through homology", based on the notion that similar sequences share similar functions and structures. This procedure consists of assigning sequences to a specific group of functionally related sequences which have been grouped through clustering techniques. The clustering procedure is based on suitable similarity rules, since predicting protein structure and function from sequence largely depends on the value of sequence identity. However, additional levels of complexity are due to multi-domain proteins, to proteins that share common domains but do not necessarily share the same function, and to the finding that different combinations of shared domains can lead to different biological roles. In the last part of this study I developed and validated a system that contributes to sequence annotation by taking advantage of a validated transfer-through-inheritance procedure for molecular functions and structural templates. After a cross-genome comparison with the BLAST program, clusters were built on the basis of two stringent constraints on sequence identity and coverage of the alignment. The adopted measure explicitly addresses the problem of multi-domain protein annotation and allows a fine-grained division of the whole set of proteomes used, which ensures cluster homogeneity in terms of sequence length. A high level of coverage of structure templates on the length of protein sequences within clusters ensures that multi-domain proteins, when present, can be templates for sequences of similar length. This annotation procedure includes the possibility of reliably transferring statistically validated functions and structures to sequences, considering information available in the present databases of molecular functions and structures.
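A small sketch of the two network descriptors examined in the first part of the work (characteristic path length and clustering coefficient), computed on a residue contact map; the contact map below is built from a randomly generated toy chain with an assumed 8 Å cutoff, not a real protein.

import numpy as np
import networkx as nx

n_res = 120
coords = np.cumsum(np.random.normal(0, 2.0, (n_res, 3)), axis=0)   # toy chain coordinates
dist = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
contact_map = (dist < 8.0) & ~np.eye(n_res, dtype=bool)            # contacts below the cutoff

G = nx.from_numpy_array(contact_map.astype(int))
print(nx.average_clustering(G))                    # clustering coefficient of the contact network
if nx.is_connected(G):
    print(nx.average_shortest_path_length(G))      # characteristic path length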
Abstract:
The present study carried out an analysis of rural landscape changes. In particular, it focuses on understanding the driving forces acting on the rural built environment, using a statistical spatial model implemented through GIS techniques. It is well known that the study of landscape changes is essential for conscious decision making in land planning. A literature review reveals a general lack of studies dealing with the modelling of the rural built environment, and hence a theoretical modelling approach for this purpose is needed. Advances in technology and modernity in building construction and agriculture have gradually changed the rural built environment. In addition, the phenomenon of urbanization has led to the construction of new volumes beside abandoned or derelict rural buildings. Consequently, two main types of transformation dynamics affecting the rural built environment can be observed: the conversion of rural buildings and the increase in building numbers. The specific aim of the present study is to propose a methodology for the development of a spatial model that allows the identification of the driving forces acting on building allocation. In fact, one of the most concerning dynamics nowadays is the irrational expansion of building sprawl across the landscape. The proposed methodology is composed of several conceptual steps covering different aspects of the development of a spatial model: the selection of a response variable that best describes the phenomenon under study, the identification of possible driving forces, the sampling methodology for data collection, the choice of the most suitable algorithm in relation to the statistical theory and methods used, and the calibration and evaluation of the model. Different combinations of factors in various parts of the territory generated more or less favourable conditions for building allocation, and the existence of buildings represents the evidence of such an optimum. Conversely, the absence of buildings expresses a combination of agents that is not suitable for building allocation. The presence or absence of buildings can therefore be adopted as an indicator of these driving conditions, since it represents the expression of the action of driving forces in the land suitability sorting process. The existence of a correlation between site selection and hypothetical driving forces, evaluated by means of modelling techniques, provides evidence of which driving forces are involved in the allocation dynamic and insight into their level of influence on the process. GIS software, through spatial analysis tools, makes it possible to associate the concept of presence and absence with point features, generating a point process. The presence or absence of buildings at given site locations represents the expression of the interaction of these driving factors. In the case of presences, points represent the locations of real, existing buildings; conversely, absences represent locations where buildings do not exist and are therefore generated by a stochastic mechanism. Possible driving forces are selected, and the existence of a causal relationship with building allocation is assessed through a spatial model. The adoption of empirical statistical models provides a mechanism for explanatory variable analysis and for the identification of the key driving variables behind the site selection process for new building allocation.
The model developed by following this methodology is applied to a case study to test its validity. In particular, the study area chosen for testing the methodology is the New District of Imola, characterized by a predominantly agricultural production vocation and where transformation dynamics occurred intensively. The development of the model involved the identification of predictive variables (related to the geomorphological, socio-economic, structural and infrastructural systems of the landscape) capable of representing the driving forces responsible for landscape changes. The model is calibrated using spatial data on the periurban and rural parts of the study area over the 1975-2005 time period by means of a generalized linear model. The resulting output of the model fit is a continuous grid surface whose cells assume values, ranging from 0 to 1, of the probability of building occurrence across the rural and periurban parts of the study area. Hence the response variable assesses the changes in the rural built environment that occurred in this time interval and is correlated with the selected explanatory variables by means of a generalized linear model using logistic regression. By comparing the probability map obtained from the model with the actual rural building distribution in 2005, the interpretive capability of the model can be evaluated. The proposed model can also be applied to the interpretation of trends that occurred in other study areas, and over different time intervals, depending on the availability of data. The use of suitable data in terms of time, information, and spatial resolution, and the costs related to data acquisition, pre-processing, and survey, are among the most critical aspects of model implementation. Future in-depth studies can focus on using the proposed model to predict short- to medium-range future scenarios for the distribution of the rural built environment in the study area. In order to predict future scenarios, it is necessary to assume that the driving forces do not change and that their levels of influence within the model are not far from those assessed for the calibration time interval.
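A minimal sketch of the presence/absence logistic generalized linear model described above: building presence (1) or stochastically generated absence (0) is regressed on candidate driving-force covariates, and the fitted model yields probabilities between 0 and 1. The covariate names and data are illustrative, not the Imola dataset.

import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(3)
pts = pd.DataFrame({
    "slope":     rng.uniform(0, 30, 500),       # geomorphological covariate (deg)
    "dist_road": rng.uniform(0, 5000, 500),     # infrastructural covariate (m)
    "dist_town": rng.uniform(0, 10000, 500),    # structural covariate (m)
})
presence = rng.integers(0, 2, 500)               # 1 = existing building, 0 = absence point

X = sm.add_constant(pts)
model = sm.GLM(presence, X, family=sm.families.Binomial()).fit()   # logistic regression
print(model.params)
print(model.predict(X)[:5])                       # predicted probabilities in [0, 1]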
Abstract:
Apple consumption is highly recommended for a healthy diet, and the apple is the most important fruit produced in temperate climate regions. Unfortunately, it is also one of the fruits that most often provokes allergy in atopic patients, and the only treatment available to date for these apple-allergic patients is avoidance. Apple allergy is due to the presence of four major classes of allergens: Mal d 1 (PR-10/Bet v 1-like proteins), Mal d 2 (thaumatin-like proteins), Mal d 3 (lipid transfer protein) and Mal d 4 (profilin). In this work, new advances in the characterization of the apple allergen gene families have been achieved using a multidisciplinary approach. First of all, a genomic approach was used for the characterization of the allergen gene families of Mal d 1 (task of Chapter 1), Mal d 2 and Mal d 4 (task of Chapter 5). In particular, in Chapter 1 the study of two large contiguous blocks of DNA sequences containing the Mal d 1 gene cluster on LG16 made it possible to acquire many new findings on the number and orientation of the genes in the cluster, their physical distances, their regulatory sequences, and the presence of other genes or pseudogenes in this genomic region. Three new members were discovered co-localizing with the other Mal d 1 genes of LG16, suggesting that the complexity of the genetic basis of allergenicity will increase with new advances. Many retrotransposon elements were also retrieved in this cluster. Thanks to the development of molecular markers on the two sequences, the anchoring of the physical and genetic maps of the region was successfully achieved. Moreover, in Chapter 5 the existence of loci for the thaumatin-like protein family in apple (Mal d 2.03 on LG4 and Mal d 2.02 on LG17) other than the one reported so far was demonstrated for the first time. One new locus for profilins (Mal d 4.04) was also mapped on LG2, close to the Mal d 4.02 locus, suggesting a cluster organization for this gene family, as is well reported for the Mal d 1 family. Secondly, a methodological approach was used to set up a highly specific tool to discriminate and quantify the expression of each Mal d 1 allergen gene (task of Chapter 2). In particular, a set of 20 Mal d 1 gene-specific primer pairs for the quantitative real-time PCR technique was validated and optimized. As a first application, this tool was used on leaf and fruit tissues of the cultivar Florina in order to identify the Mal d 1 allergen genes that are expressed in different tissues. The differential expression retrieved in this study revealed a tissue specificity for some Mal d 1 genes: 10/20 Mal d 1 genes were expressed in fruits and are thus probably more involved in the allergic reactions, while 17/20 Mal d 1 genes were expressed in leaves challenged with the fungus Venturia inaequalis and are therefore probably of interest in the study of the plant defense mechanism. In Chapter 3, the specific expression levels of the 10 Mal d 1 isoallergen genes found to be expressed in fruits were studied for the first time in the skin and flesh of apples of different genotypes. A complex gene expression profile was obtained owing to the high gene, tissue and genotype variability. Despite this, the Mal d 1.06A and Mal d 1.07 expression patterns proved to be particularly associated with the degree of allergenicity of the different cultivars.
They were not the most expressed Mal d 1 genes in apple, but a relevant importance in the determination of allergenicity was hypothesized here for both the qualitative and quantitative aspects of the Mal d 1 gene expression levels. In Chapter 4, a clear modulation of all 17 PR-10 genes tested in young leaves of Florina after challenge with the fungus V. inaequalis was reported, but with a peculiar expression profile for each gene. Interestingly, all the Mal d 1 genes were up-regulated except Mal d 1.10, which was down-regulated after the challenge with the fungus. The differences in direction, timing and magnitude of induction seem to confirm the hypothesis of a subfunctionalization within the gene family despite a high sequence and structure similarity. Moreover, the modulation of PR-10 genes shown in both compatible (Gala-V. inaequalis) and incompatible (Florina-V. inaequalis) interactions contributes to validating the hypothesis of an indirect role for at least some of these proteins in the induced defense responses. Finally, a certain modulation of PR-10 transcripts, retrieved also in leaves treated with water, confirms their ability to respond to abiotic stress as well. To conclude, the genomic approach used here allowed the creation of a comprehensive inventory of all the genes of the allergen families, especially in the case of extended gene families like Mal d 1. This knowledge can be considered a basic prerequisite for many further studies. On the other hand, the specific transcriptional approach made it possible to evaluate the behavior of the Mal d 1 genes in different samples and conditions and, therefore, to speculate on their involvement in the apple allergenicity process. Considering the double nature of Mal d 1 proteins, as apple allergens and as PR-10 proteins, the gene expression analysis upon the attack of the fungus created the basis for unraveling the Mal d 1 biological functions. In particular, the knowledge acquired in this work about the PR-10 genes putatively more involved in the specific Malus-V. inaequalis interaction will be helpful, in the future, to drive apple breeding for hypo-allergenic genotypes without compromising the plants' response mechanisms to stress conditions. In the future, the survey of differences in allergenicity among cultivars has to be made more thorough by including other genotypes and allergic patients in the tests. After this, the allelic diversity analysis of the high- and low-allergenic cultivars for all the allergen genes, in particular those with transcription levels correlated with allergenicity, will provide the genetic background of the low-allergenic ones. This step from genes to alleles will allow the development of molecular markers that might be used to effectively direct apple breeding for hypo-allergenicity. Another important step forward in the study of apple allergens will be the use of a specific proteomic approach, since apple allergy is a multifactor-determined disease and only an interdisciplinary and integrated approach can be effective for its prevention and treatment.
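A hedged sketch of relative expression by the 2^-ΔΔCt method, a common way of quantifying the kind of qRT-PCR data described above; the Ct values, the reference gene, and the tissue comparison are invented for illustration and are not the thesis's measurements.

# Illustrative Ct values: a Mal d 1 target gene vs. an assumed reference gene, fruit vs. leaf
ct_target_fruit, ct_ref_fruit = 24.1, 18.0
ct_target_leaf,  ct_ref_leaf  = 27.3, 18.2

delta_fruit = ct_target_fruit - ct_ref_fruit        # normalize target to the reference gene
delta_leaf  = ct_target_leaf - ct_ref_leaf
fold_change = 2 ** -(delta_fruit - delta_leaf)       # fruit expression relative to leaf
print(f"fold change: {fold_change:.2f}")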
Abstract:
In the present work we perform an econometric analysis of the Tribal art market. To this aim, we use a unique and original database that includes information on Tribal art auctions worldwide from 1998 to 2011. In the literature, art prices are modelled through the hedonic regression model, a classic fixed-effect model. The main drawback of the hedonic approach is the large number of parameters, since, in general, art data include many categorical variables. In this work, we propose a multilevel model for the analysis of Tribal art prices that takes into account the influence of time on artwork prices. In fact, it is natural to assume that time exerts an influence over the price dynamics in various ways. Nevertheless, since the set of objects changes at every auction date, we do not have repeated measurements of the same items over time. Hence, the dataset does not constitute a proper panel; rather, it has a two-level structure in which items, the level-1 units, are grouped in time points, the level-2 units. The main theoretical contribution is the extension of classical multilevel models to cope with the case described above. In particular, we introduce a model with time-dependent random effects at the second level. We propose a novel specification of the model, derive the maximum likelihood estimators and implement them through the E-M algorithm. We test the finite-sample properties of the estimators and the validity of our own R code by means of a simulation study. Finally, we show that the new model considerably improves the fit of the Tribal art data with respect to both the hedonic regression model and the classic multilevel model.
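For orientation, a standard two-level model (items nested in auction dates) with a random intercept can be sketched as below; the thesis's contribution adds time-dependent random effects at the second level, which this illustrative snippet does not implement, and the data and variable names are synthetic.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 600
df = pd.DataFrame({
    "log_price":    rng.normal(8, 1, n),        # level-1 response: log hammer price
    "log_height":   rng.normal(3, 0.4, n),      # illustrative hedonic covariate
    "auction_date": rng.integers(0, 30, n),     # level-2 grouping: time points
})

# Random intercept per auction date, i.e. a classic two-level (multilevel) model
m = smf.mixedlm("log_price ~ log_height", df, groups=df["auction_date"]).fit()
print(m.summary())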
Abstract:
This thesis deals with the study of optimal control problems for the incompressible Magnetohydrodynamics (MHD) equations. Particular attention to these problems arises from several applications in science and engineering, such as fission nuclear reactors with liquid metal coolant and aluminum casting in metallurgy. In such applications it is of great interest to achieve control of the fluid state variables through the action of the magnetic Lorentz force. In this thesis we investigate a class of boundary optimal control problems, in which the flow is controlled through the boundary conditions of the magnetic field. Due to their complexity, these problems present various challenges in the definition of an adequate solution approach, both from a theoretical and from a computational point of view. In this thesis we propose a new boundary control approach, based on lifting functions of the boundary conditions, which yields both theoretical and numerical advantages. With the introduction of lifting functions, boundary control problems can be formulated as extended distributed problems. We consider a systematic mathematical formulation of these problems in terms of the minimization of a cost functional constrained by the MHD equations. The existence of a solution to the flow equations and to the optimal control problem is shown. The Lagrange multiplier technique is used to derive an optimality system from which candidate solutions for the control problem can be obtained. In order to achieve the numerical solution of this system, a finite element approximation is considered for the discretization, together with an appropriate gradient-type algorithm. A finite element object-oriented library has been developed to obtain a parallel and multigrid computational implementation of the optimality system based on a multiphysics approach. Numerical results of two- and three-dimensional computations show that a possible minimum for the control problem can be computed in a robust and accurate manner.
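A generic sketch of the gradient-type loop mentioned above: solve the state problem, solve the adjoint (Lagrange multiplier) problem, then update the control toward a minimum of the cost functional. The state and adjoint solvers below are trivial stand-ins for the finite element MHD and adjoint solves, and the regularization weight and step size are illustrative assumptions.

import numpy as np

def solve_state(control):            # placeholder for the MHD state solve
    return control * 0.8

def solve_adjoint(state, target):    # placeholder for the adjoint solve
    return state - target

def cost(state, control, target, alpha=1e-2):
    # tracking-type cost functional with a quadratic penalty on the control
    return 0.5 * np.sum((state - target) ** 2) + 0.5 * alpha * np.sum(control ** 2)

target = np.ones(10)
control = np.zeros(10)
for it in range(100):
    state = solve_state(control)
    adjoint = solve_adjoint(state, target)
    grad = 0.8 * adjoint + 1e-2 * control    # reduced gradient for these stub operators
    control -= 0.5 * grad                     # simple fixed-step gradient update
print(cost(solve_state(control), control, target))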