969 results for Semi-parametric models


Relevance:

30.00%

Publisher:

Abstract:

Dissertation (master's degree)—Universidade de Brasília, Faculdade de Tecnologia, Departamento de Engenharia Civil e Ambiental, 2015.

Relevance:

30.00%

Publisher:

Abstract:

We estimate a dynamic model of mortgage default for a cohort of Colombian debtors between 1997 and 2004. We use the estimated model to study the effects on default of a class of policies that affected the evolution of mortgage balances in Colombia during the 1990s. We propose a framework for estimating dynamic behavioral models that accounts for unobserved state variables which are correlated across individuals and across time periods. We extend the standard literature on the structural estimation of dynamic models by incorporating an unobserved common correlated shock that affects all individuals' static payoffs and the dynamic continuation payoffs associated with different decisions. Given a standard parametric specification of the dynamic problem, we show that the aggregate shocks are identified from variation in the observed aggregate behavior. The shocks and their transition are separately identified, provided there is enough cross-sectional variation of the observed states.
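The setup can be sketched schematically; the notation below is illustrative and not taken from the abstract. A common aggregate shock η_t enters both the static payoff and, through its transition, the continuation value:

```latex
% Illustrative dynamic discrete-choice payoff with a common shock \eta_t
u(x_{it}, d) \;=\; \bar{u}(x_{it}, d;\,\theta) \;+\; \eta_t \;+\; \varepsilon_{it}(d),
\qquad
V(x,\eta) \;=\; \max_{d}\Big\{\, \bar{u}(x,d;\theta) + \eta \;+\;
\beta\,\mathbb{E}\big[\,V(x',\eta') \,\big|\, x,\eta,d \,\big] \Big\}
```

Because η_t shifts every individual's payoff in the same period, its realizations leave a footprint in aggregate default rates, which is the source of identification described above.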

Relevance:

30.00%

Publisher:

Abstract:

The method "toe-to-heel air injection" (THAI™) is an enhanced oil recovery process that integrates in-situ combustion with technological advances in the drilling of horizontal wells. This method uses horizontal wells as oil producers while keeping vertical wells for air injection. The process has not yet been applied in Brazil, making it necessary to evaluate these new technologies against local conditions. This study therefore performed a parametric study of the in-situ combustion process with oil production in horizontal wells, using a semi-synthetic reservoir with characteristics of a Brazilian Northeast basin. The simulations were performed with the commercial software "STARS" (Steam, Thermal, and Advanced Processes Reservoir Simulator) from CMG (Computer Modelling Group). The following operating parameters were analyzed: air injection rate, producer well configuration and oxygen concentration. A sensitivity study on cumulative oil production (Np) was performed with the technique of experimental design, using a mixed two- and three-level model (3²×2²), for a total of 36 runs. A technical-economic estimate was also made for each fluid model. The results showed that the air injection rate was the most influential parameter on oil recovery for both models studied, that the best well arrangement depends on the fluid model, and that a higher oxygen concentration favors oil recovery. The process can be profitable depending on the air rate.
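The 3²×2² mixed-level full factorial design can be illustrated with a short enumeration. The factor names and level values below are placeholders, since the abstract does not list them:

```python
from itertools import product

# Hypothetical factor levels for the mixed 3^2 x 2^2 design described above;
# the actual levels used in the thesis are not given, so these are
# illustrative placeholders.
levels = {
    "air_rate": ["low", "medium", "high"],        # 3 levels
    "oxygen_concentration": [21, 30, 40],          # 3 levels (% O2, assumed)
    "producer_well_config": ["A", "B"],            # 2 levels
    "fluid_model": [1, 2],                         # 2 levels
}

# Full factorial: one run per combination of factor levels.
runs = [dict(zip(levels, combo)) for combo in product(*levels.values())]
print(len(runs))  # 3 * 3 * 2 * 2 = 36 runs
```

Enumerating every combination is what yields the 36 simulation runs on which the sensitivity analysis of Np is based.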

Relevance:

30.00%

Publisher:

Abstract:

This thesis is concerned with change point analysis for time series, i.e. with the detection of structural breaks in time-ordered, random data. This long-standing research field has regained popularity over the last few years and, like statistical analysis in general, is still undergoing a transformation to high-dimensional problems. We focus on the fundamental »change in the mean« problem and provide extensions of the classical non-parametric Darling-Erdős-type cumulative sum (CUSUM) testing and estimation theory within high-dimensional Hilbert space settings. In the first part we contribute to (long run) principal component based testing methods for Hilbert space valued time series under a rather broad (abrupt, epidemic, gradual, multiple) change setting and under dependence. For the dependence structure we consider either traditional m-dependence assumptions or the more recently developed m-approximability conditions, which cover, e.g., MA, AR and ARCH models. We derive Gumbel and Brownian bridge type approximations of the distribution of the test statistic under the null hypothesis of no change, and consistency conditions under the alternative. A new formulation of the test statistic using projections on subspaces allows us to simplify the standard proof techniques and to weaken common assumptions on the covariance structure. Furthermore, we propose to adjust the principal components by an implicit estimation of a (possible) change direction. This approach adds flexibility to projection based methods, weakens typical technical conditions and provides better consistency properties under the alternative. In the second part we contribute to estimation methods for common changes in the means of panels of Hilbert space valued time series. We analyze weighted CUSUM estimates within a recently proposed »high-dimensional low sample size (HDLSS)« framework, where the sample size is fixed but the number of panels increases.
We derive sharp conditions on »pointwise asymptotic accuracy« or »uniform asymptotic accuracy« of those estimates in terms of the weighting function. Particularly, we prove that a covariance-based correction of Darling-Erdős-type CUSUM estimates is required to guarantee uniform asymptotic accuracy under moderate dependence conditions within panels and that these conditions are fulfilled, e.g., by any MA(1) time series. As a counterexample we show that for AR(1) time series, close to the non-stationary case, the dependence is too strong and uniform asymptotic accuracy cannot be ensured. Finally, we conduct simulations to demonstrate that our results are practically applicable and that our methodological suggestions are advantageous.
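The classical CUSUM idea underlying these results can be illustrated in a minimal univariate form; the thesis treats the (high-dimensional) Hilbert-space-valued case, which this toy sketch does not attempt to reproduce:

```python
import numpy as np

def cusum_change_point(x):
    """Estimate a single change in the mean via the classical CUSUM.

    A minimal univariate sketch of Darling-Erdos-type CUSUM testing;
    the normalization and thresholding details of the thesis are omitted.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    # Partial sums of deviations from the global mean.
    s = np.cumsum(x - x.mean())
    # Standardized CUSUM statistic at each candidate split k = 1..n-1.
    stat = np.abs(s[:-1]) / (x.std(ddof=1) * np.sqrt(n))
    k_hat = int(np.argmax(stat)) + 1  # estimated change point
    return stat.max(), k_hat

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0.0, 1.0, 100), rng.normal(3.0, 1.0, 100)])
_, k_hat = cusum_change_point(x)
print(k_hat)  # close to the true change point at 100
```

The statistic peaks near the true break because the partial sums drift away from zero on either side of the change.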

Relevance:

30.00%

Publisher:

Abstract:

Understanding how virus strains offer protection against closely related emerging strains is vital for creating effective vaccines. For many viruses, including Foot-and-Mouth Disease Virus (FMDV) and the Influenza virus, where multiple serotypes often co-circulate, in vitro testing of large numbers of vaccines can be infeasible. Therefore, the development of an in silico predictor of cross-protection between strains is important to help optimise vaccine choice. Vaccines will offer cross-protection against closely related strains, but not against those that are antigenically distinct. To be able to predict cross-protection we must understand the antigenic variability within a virus serotype and within distinct lineages of a virus, and identify the antigenic residues and evolutionary changes that cause the variability. In this thesis we present a family of sparse hierarchical Bayesian models for detecting relevant antigenic sites in virus evolution (SABRE), as well as an extended version of the method, the extended SABRE (eSABRE) method, which better takes into account the data collection process. The SABRE methods are a family of sparse Bayesian hierarchical models that use spike and slab priors to identify sites in the viral protein which are important for the neutralisation of the virus. We demonstrate how the SABRE methods can be used to identify antigenic residues within different serotypes and show how the SABRE method outperforms established methods, namely mixed-effects models based on forward variable selection or ℓ1 regularisation, on both synthetic and viral datasets. In addition we test a number of different versions of the SABRE method, comparing conjugate and semi-conjugate prior specifications as well as an alternative to the spike and slab prior: the binary mask model. We also propose novel proposal mechanisms for the Markov chain Monte Carlo (MCMC) simulations, which improve mixing and convergence over the established component-wise Gibbs sampler.
The SABRE method is then applied to datasets from FMDV and the Influenza virus in order to identify a number of known antigenic residues and to provide hypotheses about other potentially antigenic residues. We also demonstrate how the SABRE methods can be used to create accurate predictions of the important evolutionary changes of the FMDV serotypes. We then provide an extended version of the SABRE method, the eSABRE method, based on a latent variable model. The eSABRE method takes further into account the structure of the FMDV and Influenza datasets through the latent variable model and improves the modelling of the error. We show how the eSABRE method outperforms the SABRE methods in simulation studies and propose a new information criterion for selecting the random effects factors that should be included in the eSABRE method: the block integrated Widely Applicable Information Criterion (biWAIC). We demonstrate how biWAIC performs comparably to two other methods for selecting the random effects factors and combine it with the eSABRE method to apply it to two large Influenza datasets. Inference in these large datasets is computationally infeasible with the SABRE methods but, as a result of the improved structure of the likelihood, the eSABRE method offers a computational improvement that makes it usable on these datasets. The results of the eSABRE method show that we can use the method in a fully automatic manner to identify a large number of antigenic residues on a variety of the antigenic sites of two Influenza serotypes, as well as making predictions of a number of nearby sites that may also be antigenic and are worthy of further experimental investigation.
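The spike and slab idea at the heart of the SABRE models can be illustrated with a minimal sampler; the mixture weight and variances below are arbitrary placeholders, not the values used in the thesis:

```python
import numpy as np

rng = np.random.default_rng(1)

def spike_and_slab_sample(p, w=0.2, slab_sd=2.0, spike_sd=0.01):
    """Draw p coefficients from a spike-and-slab mixture prior.

    With probability w a coefficient comes from the wide "slab"
    (a relevant antigenic residue); otherwise it comes from the
    narrow "spike" near zero (an irrelevant one). The mixture
    weight and variances are illustrative, not the thesis's values.
    """
    gamma = rng.random(p) < w                 # binary inclusion indicators
    sd = np.where(gamma, slab_sd, spike_sd)   # per-coefficient prior sd
    beta = rng.normal(0.0, sd)                # the regression coefficients
    return gamma, beta

gamma, beta = spike_and_slab_sample(1000)
print(gamma.sum(), "coefficients drawn from the slab")
```

In the full models the indicators are sampled jointly with the coefficients by MCMC, and sites with high posterior inclusion probability are flagged as antigenically relevant.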

Relevance:

30.00%

Publisher:

Abstract:

This study focuses on the learning and teaching of Reading in English as a Foreign Language (REFL), in Libya. The study draws on an action research process in which I sought to look critically at students and teachers of English as a Foreign Language (EFL) in Libya as they learned and taught REFL in four Libyan research sites. The Libyan EFL educational system is influenced by two main factors: the method of teaching the Holy-Quran and the long-time ban on teaching EFL by the former Libyan regime under Muammar Gaddafi. Both of these factors have affected the learning and teaching of REFL and I outline these contextual factors in the first chapter of the thesis. This investigation, and the exploration of the challenges that Libyan university students encounter in their REFL, is supported by attention to reading models. These models helped to provide an analytical framework and starting point for understanding the many processes involved in reading for meaning and in reading to satisfy teacher instructions. The theoretical framework I adopted was based, mainly and initially, on top-down, bottom-up, interactive and compensatory interactive models. I drew on these models with a view to understanding whether and how the processes of reading described in the models could be applied to the reading of EFL students and whether these models could help me to better understand what was going on in REFL. The diagnosis stage of the study provided initial data collected from four Libyan research sites with research tools including video-recorded classroom observations, semi-structured interviews with teachers before and after lesson observation, and think-aloud protocols (TAPs) with 24 students (six from each university) in which I examined their REFL reading behaviours and strategies. 
This stage indicated that the majority of students shared behaviours such as reading aloud, reading each word in the text, articulating the phonemes and syllables of words, or skipping words if they could not pronounce them. Overall, this first stage indicated that alternative methods of teaching REFL were needed in order to encourage ‘reading for meaning’ that might be based on strategies related to eventual interactive reading models adapted for REFL. The second phase of this research project was an intervention phase involving two team-teaching sessions in one of the four stage-one universities. In each session, I worked with the teacher of one group to introduce an alternative method of REFL. This method was based on teaching different reading strategies to encourage the students to work towards an eventual interactive way of reading for meaning. A focus group discussion and TAPs with six students followed the lessons, in order to discuss the ‘new’ method. Next came two video-recorded classroom observations, which were followed by an audio-recorded discussion with the teacher about these methods. Finally, I conducted a Skype interview with the class teacher at the end of the semester to discuss any changes he had made in his teaching or had observed in his students’ reading with respect to reading behaviour strategies, and the reactions and performance of the students as he continued to use the ‘new’ method. The results of the intervention stage indicate that the teacher, perhaps not surprisingly, can play an important role in adding to students’ knowledge and confidence and in improving their REFL strategies. For example, after the intervention stage, students began to think about the title and to use their own background knowledge to comprehend the text. The students also employed linguistic strategies such as decoding and, above all, abandoned the behaviour of reading for pronunciation in favour of reading for meaning.
Despite the apparent efficacy of the alternative method, there are, inevitably, limitations related to the small-scale nature of the study and the time I had available to conduct the research. There are challenges, too, related to the students’ first language, the idiosyncrasies of the English language, the training and continuing professional development of teachers, and the continuing political instability of Libya. The students’ lack of vocabulary and their difficulties with grammatical functions such as phrasal and prepositional verbs, forms which do not exist in Arabic, mean that REFL will always be challenging. Given such constraints, the ‘new’ methods I trialled and propose for adoption can only go so far in addressing students’ difficulties in REFL. Overall, the study indicates that the Libyan educational system is underdeveloped and under-resourced with respect to REFL. My data indicate that the teacher participants have received little to no professional development that could help them improve their REFL teaching or their EFL teaching skills. These circumstances, along with the perennial problem of large but varying class sizes; student, teacher and assessment expectations; and limited and often poor-quality resources, affect the way EFL students learn to read in English. Against this background, the thesis concludes by offering tentative conclusions, reflections on the study (including a discussion of its limitations), and possible recommendations designed to improve REFL learning and teaching in Libyan universities.

Relevance:

30.00%

Publisher:

Abstract:

Conventional web search engines are centralised in that a single entity crawls and indexes the documents selected for future retrieval and maintains the relevance models used to determine which documents are relevant to a given user query. As a result, these search engines suffer from several technical drawbacks, such as limited scalability, timeliness and reliability, in addition to ethical concerns such as commercial manipulation and information censorship. To alleviate the need to rely entirely on a single entity, Peer-to-Peer (P2P) Information Retrieval (IR) has been proposed as a solution, as it distributes the functional components of a web search engine – from crawling and indexing documents to query processing – across the network of users (or peers) who use the search engine. This strategy for constructing an IR system poses several efficiency and effectiveness challenges which have been identified in past work. Accordingly, this thesis makes several contributions towards advancing the state of the art in P2P-IR effectiveness by improving the query processing and relevance scoring aspects of P2P web search. Federated search systems are a form of distributed information retrieval model that routes the user’s information need, formulated as a query, to distributed resources and merges the retrieved result lists into a final list. P2P-IR networks are one form of federated search, routing queries and merging results among participating peers. The query is propagated through disseminated nodes to reach the peers that are most likely to contain relevant documents, and the retrieved result lists are then merged at different points along the path from the relevant peers back to the query initiator (or ‘customer’).
However, query routing is considered one of the major challenges and a critical part of P2P-IR networks: relevant peers might be lost through low-quality peer selection during query routing, inevitably leading to less effective retrieval results. This motivates this thesis to study and propose query routing techniques that improve retrieval quality in such networks. Cluster-based semi-structured P2P-IR networks exploit the cluster hypothesis to organise the peers into similar semantic clusters, where each semantic cluster is managed by super-peers. In this thesis, I construct three semi-structured P2P-IR models and examine their retrieval effectiveness. I also leverage the cluster centroids at the super-peer level, as content representations gathered from cooperative peers, to propose a query routing approach called the Inverted PeerCluster Index (IPI), which mimics the conventional inverted index of a centralised corpus in organising the statistics of peers’ terms. The results show competitive retrieval quality in comparison to baseline approaches. Furthermore, I study the applicability of conventional Information Retrieval models as peer selection approaches, where each peer can be considered as one big document of documents. The experimental evaluation shows comparable and significant results, indicating that document retrieval methods are very effective for peer selection, which reinforces the analogy between documents and peers. Additionally, Learning to Rank (LtR) algorithms are exploited to build a learned classifier for peer ranking at the super-peer level. The experiments show significant results against state-of-the-art resource selection methods and competitive results against corresponding classification-based approaches. Finally, I propose reputation-based query routing approaches that exploit the idea, familiar from social community networks, of providing feedback on a specific item and managing it for future decision-making.
The system monitors users’ behaviour when they click or download documents from the final ranked list, treats this as implicit feedback, and mines the given information to build a reputation-based data structure. The data structure is used to score peers and then rank them for query routing. I conduct a set of experiments covering various scenarios, including noisy feedback information (i.e., positive feedback given on non-relevant documents), to examine the robustness of the reputation-based approaches. The empirical evaluation shows significant results on almost all measurement metrics, with improvements of more than 56% over baseline approaches. Thus, based on the results, if one were to choose a single technique, the reputation-based approaches are clearly the natural choice, and they can also be deployed on any P2P network.
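The 'peer as one big document' idea can be illustrated with a toy TF-IDF peer ranking; the peer contents, query and scoring function here are invented for the sketch and merely stand in for the conventional IR models evaluated in the thesis:

```python
from collections import Counter
import math

# Each peer's collection is concatenated into one "big document"
# (contents invented for illustration).
peers = {
    "peer1": "semi parametric models regression spline",
    "peer2": "peer to peer networks routing overlay",
    "peer3": "information retrieval ranking models evaluation",
}

def score(query, doc, corpus):
    """TF-IDF score of a query against one peer's concatenated content."""
    doc_tf = Counter(doc.split())
    n = len(corpus)
    total = 0.0
    for term in query.split():
        df = sum(term in d.split() for d in corpus.values())  # document frequency
        if df:
            total += doc_tf[term] * math.log(1 + n / df)
    return total

query = "retrieval models"
ranked = sorted(peers, key=lambda p: score(query, peers[p], peers), reverse=True)
print(ranked[0])  # → peer3
```

A query router would then forward the query only to the top-scoring peers, which is exactly where poor peer selection can lose relevant results.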

Relevance:

30.00%

Publisher:

Abstract:

DEA models have been applied as benchmarking tools in operations management to empirically assess operational and productive efficiency. The wide flexibility in assigning weights in the DEA approach can result in efficiency indicators that do not take into account the relative importance of some inputs. In order to overcome this limitation, in this research we apply a DEA model under a restricted weight specification. This model is applied to Spanish hotel companies in order to measure operational efficiency. The restricted weight specification enables us to reduce the influence of unrealistic weights assigned to some units, to improve the efficiency estimates, and to increase the discriminating power of the conventional DEA model.
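For context, a standard way to impose such restrictions is the multiplier form of the CCR model with assurance-region bounds on weight ratios. This is a generic formulation; the specific restrictions used in the study are not reproduced here:

```latex
\max_{u,v}\; \sum_{r} u_r\, y_{r0}
\quad\text{s.t.}\quad
\sum_{i} v_i\, x_{i0} = 1,\qquad
\sum_{r} u_r\, y_{rj} \;-\; \sum_{i} v_i\, x_{ij} \;\le\; 0 \;\;\forall j,
\qquad
L_i \;\le\; \frac{v_i}{v_1} \;\le\; U_i,\qquad u_r,\, v_i \ge 0
```

The ratio bounds L_i, U_i prevent the linear program from assigning a negligible weight to an input that is known to matter, which is the limitation of unrestricted DEA described above.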

Relevance:

30.00%

Publisher:

Abstract:

A new semi-implicit stress integration algorithm for finite strain plasticity (compatible with hyperelasticity) is introduced. Its most distinctive feature is the use of different parameterizations of the equilibrium and reference configurations. Rotation terms (nonlinear trigonometric functions) are integrated explicitly and correspond to a change in the reference configuration. In contrast, relative Green–Lagrange strains (which are quadratic in terms of displacements) represent the equilibrium configuration implicitly. In addition, the adequacy of several objective stress rates in the semi-implicit context is studied. We parametrize both the reference and equilibrium configurations, in contrast with the so-called objective stress integration algorithms, which use coinciding configurations. A single constitutive framework provides the quantities needed by common discretization schemes. This is computationally convenient and robust, as all elements only need to provide pre-established quantities irrespective of the constitutive model. In this work, mixed strain/stress control is used, as well as our smoothing algorithm for the complementarity condition. Exceptional time-step robustness is achieved in elasto-plastic problems: often fewer than one-tenth of the typical number of time increments can be used, with a quantifiable effect on accuracy. The proposed algorithm is general: all hyperelastic models and all classical elasto-plastic models can be employed. Plane-stress, shell and 3D examples are used to illustrate the new algorithm. Both isotropic and anisotropic behaviour are presented in elasto-plastic and hyperelastic examples.
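For context, the (standard) Green–Lagrange strain referred to above is quadratic in the displacement gradient, which is why it can represent the equilibrium configuration implicitly:

```latex
\mathbf{E} \;=\; \tfrac{1}{2}\left(\mathbf{F}^{\mathsf T}\mathbf{F} - \mathbf{I}\right)
\;=\; \tfrac{1}{2}\left(\nabla\mathbf{u} + \nabla\mathbf{u}^{\mathsf T}
  + \nabla\mathbf{u}^{\mathsf T}\,\nabla\mathbf{u}\right),
\qquad \mathbf{F} \;=\; \mathbf{I} + \nabla\mathbf{u}
```

The quadratic term \(\nabla\mathbf{u}^{\mathsf T}\nabla\mathbf{u}\) is what distinguishes it from the small-strain tensor and makes it objective under finite rotations.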

Relevance:

30.00%

Publisher:

Abstract:

Silicon-based discrete high-power devices need to be designed for optimal performance up to several thousand volts and amperes, to reach power ratings ranging from a few kW to beyond the 1 GW mark. To this purpose, a key element is the improvement of the junction termination (JT), since it drastically reduces the surface electric field peaks which may otherwise lead to early device failure. This thesis is mostly focused on the negative bevel termination, which has for several years constituted a standard processing step in bipolar production lines. A simple methodology to realize its counterpart, a planar JT with variation of the lateral doping concentration (VLD), is also described. On the JT, a thin layer of a semi-insulating material is usually deposited, which acts as a passivation layer, reducing the interface defects and contributing to increased device reliability. A thorough understanding of how the passivation layer properties affect the breakdown voltage and the leakage current of a fast-recovery diode is fundamental to preserving the ideal termination effect and providing a stable blocking capability. More recently, amorphous carbon, also called diamond-like carbon (DLC), has been used as a robust surface passivation material. Using a commercial TCAD tool, a detailed physical explanation of DLC electrostatic and transport properties is provided. The proposed approach is able to predict the breakdown voltage and the leakage current of a negative-beveled power diode passivated with DLC, as confirmed by successful validation against the available experiments. In addition, the VLD JT proposed to overcome the limitations of the negative bevel architecture has been simulated, showing a breakdown voltage very close to the ideal one with a much smaller area consumption. Finally, the effect of a low junction depth on the formation of current filaments has been analyzed by performing reverse-recovery simulations.

Relevance:

30.00%

Publisher:

Abstract:

Inverse problems are at the core of many challenging applications. Variational and learning models provide estimated solutions of inverse problems as the outcome of specific reconstruction maps. In the variational approach, the result of the reconstruction map is the solution of a regularized minimization problem encoding information on the acquisition process and prior knowledge on the solution. In the learning approach, the reconstruction map is a parametric function whose parameters are identified by solving a minimization problem depending on a large set of data. In this thesis, we go beyond this apparent dichotomy between variational and learning models and show that they can be harmoniously merged into unified hybrid frameworks that preserve their main advantages. We develop several highly efficient methods based on both these model-driven and data-driven strategies, for which we provide a detailed convergence analysis. The resulting algorithms are applied to solve inverse problems involving images and time series. For each task, we show that the proposed schemes improve on many existing methods in terms of both computational burden and solution quality. In the first part, we focus on gradient-based regularized variational models, which are shown to be effective for segmentation purposes and for thermal and medical image enhancement. We consider gradient sparsity-promoting regularized models for which we develop different strategies to estimate the regularization strength. Furthermore, we introduce a novel gradient-based Plug-and-Play convergent scheme considering a deep learning based denoiser trained on the gradient domain. In the second part, we address the tasks of natural image deblurring, image and video super-resolution microscopy, and positioning time series prediction, through deep learning based methods.
We boost the performance of both supervised deep learning strategies, such as trained convolutional and recurrent networks, and unsupervised ones, such as Deep Image Prior, by penalizing the losses with handcrafted regularization terms.
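For orientation, a generic Plug-and-Play proximal-gradient iteration replaces the proximal operator of the regularizer with a learned denoiser D_σ. This is a schematic form only; the scheme in the thesis acts on the gradient domain, which is not reproduced here:

```latex
x^{k+1} \;=\; D_\sigma\!\big(x^{k} - \tau\,\nabla f(x^{k})\big),
\qquad
f(x) \;=\; \tfrac{1}{2}\,\|Ax - b\|_2^2,
\qquad
\nabla f(x) \;=\; A^{\mathsf T}(Ax - b)
```

The data-fidelity term f encodes the acquisition model (the "variational" side), while the denoiser D_σ carries the learned prior (the "learning" side), which is the hybridization described above.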

Relevance:

30.00%

Publisher:

Abstract:

In recent years, 3D bioprinting has emerged as an innovative and versatile technology able to produce in vitro models that resemble the native spatial organization of organ tissues, by employing one or more bioinks composed of various types of cells suspended in hydrogels. Natural and semi-synthetic hydrogels are extensively used for 3D bioprinted models since they can mimic the natural composition of the tissues; they are biocompatible and bioactive, with customizable mechanical properties, allowing them to support cell growth. The possibility of tailoring hydrogel mechanical properties by modifying the chemical structure to obtain photo-crosslinkable materials, while maintaining biocompatibility and biomimicry, makes their use versatile and suitable for simulating a broad spectrum of physiological features. In this PhD thesis, 3D bioprinted in vitro models with tailored mechanical properties and physiologically relevant features were fabricated. AlgMA-based bioinks were employed to produce a living platform with a stiffness gradient, with the aim of creating an easy-to-handle and accessible biological tool to study mechanobiology. In addition, GelMA, collagen and an IPN of GelMA and collagen were used as bioinks to fabricate a proof-of-concept 3D intestinal barrier, which includes multiple cell components and a multi-layered structure. A practical rheological guide is proposed to help users select suitable bioinks for 3D bioprinting and to correlate rheology with the model’s mechanical stability after crosslinking. In conclusion, a platform capable of reproducing models with a physiological stiffness gradient was developed, and the fabrication of 3D bioprinted intestinal models displaying a good hierarchical structure and cell composition was fully reported and successfully achieved.
The good biological results obtained demonstrate that 3D bioprinting can be used for the fabrication of 3D models and that the mechanical properties of the external environment play a key role in cell pathways, viability and morphology.

Relevance:

30.00%

Publisher:

Abstract:

The work carried out in this thesis aims at:

- Studying, with both simulations and experiments, the effect of electrical transients (i.e., Voltage Polarity Reversals (VPRs), Temporary OverVoltages (TOVs), and Superimposed Switching Impulses (SSIs)) on aging phenomena in HVDC extruded cable insulation. Dielectric spectroscopy, conductivity measurements, Fourier Transform Infra-Red (FTIR) spectroscopy, and space charge measurements show variations in the insulating properties of the aged cross-linked polyethylene (XLPE) specimens compared to non-aged ones. Scission of XLPE bonds and the formation of aging-related chemical bonds are also noticed in aged insulation, due to possible oxidation reactions. The aged materials show a greater ability to accumulate space charge compared to non-aged ones. An increase in both DC electrical conductivity and imaginary permittivity has also been noticed.
- Developing a life-based geometric design of HVDC cables, with a detailed parametric analysis of all parameters that affect the design. Furthermore, the effect of both electrical and thermal transients on the design is investigated.
- Studying the intrinsic thermal instability in HVDC cables and the effect of insulation characteristics on thermal stability, using a temperature and field iterative loop (based on numerical methods, namely the Finite Difference Method, FDM). The dielectric loss coefficient is also calculated for DC cables and found to be lower than that of AC cables. This emphasizes that intrinsic thermal instability is critical in HVDC cables.
- Fitting electrical conductivity models to the experimental measurements, using both models found in the literature and modified models, to find the best fit by considering the synergistic effect between the field and temperature coefficients of electrical conductivity.
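A common empirical conductivity law, and a modified form with a field-temperature cross term of the kind alluded to in the last point, can be written as follows. These are illustrative textbook forms; the actual fitted models and coefficients are those of the thesis:

```latex
\sigma(T,E) \;=\; \sigma_0\, e^{\alpha T}\, e^{\beta E}
\qquad\longrightarrow\qquad
\sigma(T,E) \;=\; \sigma_0\, \exp\!\big(\alpha T + \beta E + \gamma\, T E\big)
```

Here α and β are the temperature and field coefficients, and the cross term γTE is one way to capture their synergistic effect when fitting the measurements.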

Relevance:

30.00%

Publisher:

Abstract:

The study carried out in this thesis is devoted to the spectral analysis of systems of PDEs related also to quantum physics models. Namely, the research deals with classes of systems that contain certain quantum optics models, such as the Jaynes-Cummings and Rabi models and their generalizations, which describe light-matter interaction. First we investigate the spectral Weyl asymptotics for a class of semiregular systems, extending to the vector-valued case results of Helffer and Robert, and more recently of Doll, Gannot and Wunsch. Actually, the asymptotics by Doll, Gannot and Wunsch is more precise (that is why we call it refined) than the classical result by Helffer and Robert, but deals with a less general class of systems, since the authors make a hypothesis on the measure of the subset of the unit sphere on which the tangential derivatives of the X-Ray transform of the semiprincipal symbol vanish to infinite order. Next, we give a meromorphic continuation of the spectral zeta function for semiregular differential systems with polynomial coefficients, generalizing the results by Ichinose and Wakayama, and by Parmeggiani. Finally, we state and prove a quasi-clustering result for a class of systems including the aforementioned quantum optics models, and we conclude the thesis by showing a Weyl law result for the Rabi model and its generalizations.
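For orientation, the classical scalar Weyl law that the vector-valued semiregular results extend counts the eigenvalues of a positive elliptic operator of order m with principal symbol p. This is a schematic statement of the well-known scalar case, not of the results in the thesis:

```latex
N(\lambda) \;=\; \#\{\, j \,:\, \lambda_j \le \lambda \,\}
\;\sim\; \frac{1}{(2\pi)^n}\;
\mathrm{vol}\big(\{\, (x,\xi) : p(x,\xi) \le 1 \,\}\big)\;
\lambda^{\,n/m},
\qquad \lambda \to +\infty
```

In the vector-valued setting the role of p is played by the eigenvalues of the (matrix-valued) principal symbol, which is where the semiregularity and refinement hypotheses enter.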

Relevance:

30.00%

Publisher:

Abstract:

The study of the tides of celestial bodies can unveil important information about their interior as well as their orbital evolution. The most important tidal parameter is the Love number, which defines the deformation of the gravity field due to an external perturbing body. Tidal dissipation is very important because it drives the secular orbital evolution of natural satellites; this is all the more important in the Jupiter system, where three of the Galilean moons, Io, Europa and Ganymede, are locked in an orbital resonance in which the ratio of their mean motions is 4:2:1, the so-called Laplace resonance. Tidal dissipation is described by the dissipation ratio k2/Q, where Q is the quality factor, which describes the damping of the system. The goal of this thesis is to analyze and compare the two main tidal dynamical models, Mignard's model and the gravity field variation model, in order to understand the differences between them, with a main focus on the single-moon case with Io, which also helps clarify the differences between the two models without overcomplicating the dynamics. In this work we have verified and validated both models, compared them, and pinpointed the main differences and features that characterize each model. Mignard's model treats the tides directly as a force, while the gravity field variation model describes the tides through a change of the spherical harmonic coefficients. Finally, we have briefly analyzed the difference between the single-moon and two-moon cases, and we have confirmed that the governing equations describing the change of semi-major axis and eccentricity no longer hold when more moons are present.
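The 4:2:1 Laplace commensurability can be checked directly from the moons' orbital periods; the period values below are published approximations, not numbers from the thesis:

```python
# Orbital periods (days) of the Galilean moons locked in the Laplace
# resonance; values are published approximations, not from the thesis.
P_io, P_europa, P_ganymede = 1.769, 3.551, 7.155

# Mean motion n is inversely proportional to the period, so the
# 4:2:1 commensurability shows up as successive period ratios close to 2.
ratio_io_europa = P_europa / P_io
ratio_europa_ganymede = P_ganymede / P_europa
print(round(ratio_io_europa, 3), round(ratio_europa_ganymede, 3))  # → 2.007 2.015
```

The small departures from exactly 2 are real: the resonant argument librates rather than the ratios being exactly commensurate, and tidal dissipation (k2/Q) is what slowly evolves this configuration.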