912 results for finite and infinitesimal models
Abstract:
The effects of Miocene-through-Present compression in the Tagus Abyssal Plain are mapped using the most up-to-date multi-channel seismic reflection and refraction data available to the scientific community. Correlation of the rift basin fault pattern with the deep crustal structure is presented along seismic line IAM-5. Four structural domains were recognized. In the oceanic realm, mild deformation concentrates in Domain 1, adjacent to the Tore-Madeira Rise. Domain 2 is characterized by the absence of shortening structures, except near the ocean-continent transition (OCT), implying that Miocene deformation did not propagate into the Abyssal Plain. In Domain 3 we distinguish three sub-domains: Sub-domain 3A, which coincides with the OCT; Sub-domain 3B, a highly deformed adjacent continental segment; and Sub-domain 3C. The Miocene tectonic inversion is mainly accommodated in Domain 3 by oceanward-directed thrusting at the ocean-continent transition and continentward-directed thrusting on the continental slope. Domain 4 corresponds to the non-rifted continental margin, where only minor extensional and shortening deformation structures are observed. Finite element numerical models address the response of the various domains to the Miocene compression, emphasizing the long-wavelength differential vertical movements and the role of possible rheologic contrasts. The concentration of the Miocene deformation in the transitional zone (TC), which comprises Sub-domain 3A and part of Sub-domain 3B, is a result of two main factors: (1) focusing of compression in an already stressed region due to plate curvature and sediment loading; and (2) rheological weakening. We estimate that the frictional strength in the TC is reduced by 30% relative to the surrounding regions. A model of compressive deformation propagation by means of horizontal impingement of the middle continental crust rift wedge and horizontal shearing on serpentinized mantle in the oceanic realm is presented. This model is consistent with both the geological interpretation of seismic data and the results of numerical modelling.
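As a schematic illustration of the strength contrast invoked above (our parameterization, not necessarily the one used in the paper's finite element models), a Mohr-Coulomb description of frictional strength with the estimated 30% reduction in the transitional zone reads

\tau_{\mathrm{crit}} = C + \mu\,\sigma_n, \qquad \tau_{\mathrm{crit}}^{\mathrm{TC}} \approx 0.7\,(C + \mu\,\sigma_n),

where C is the cohesion, \mu the friction coefficient and \sigma_n the normal stress.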
Abstract:
Functionally graded materials are composite materials wherein the composition of the constituent phases varies in a smooth, continuous way as a function of the spatial coordinates. This characteristic is important because it can minimize the abrupt variations in material properties that are usually responsible for localized high stresses, while simultaneously providing an effective thermal barrier in specific applications. In the present work, the static and free vibration behaviour of functionally graded sandwich plate-type structures is studied using B-spline finite strip element models based on different shear deformation theories. The effective properties of functionally graded materials are estimated according to the Mori-Tanaka homogenization scheme. These sandwich structures can also include outer skins of piezoelectric materials, thus giving them adaptive characteristics. The performance of the models is illustrated through a set of test cases. (C) 2012 Elsevier Ltd. All rights reserved.
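A minimal sketch of how such effective properties can be computed, assuming the standard Mori-Tanaka estimates for a two-phase particulate composite and a power-law gradation through the thickness (the material values and exponent below are hypothetical, for illustration only):

import numpy as np

def mori_tanaka(Km, Gm, Kc, Gc, Vc):
    # Effective bulk (K) and shear (G) moduli of a matrix (m) reinforced by
    # a particulate phase (c) with volume fraction Vc, Mori-Tanaka scheme.
    fm = Gm * (9.0 * Km + 8.0 * Gm) / (6.0 * (Km + 2.0 * Gm))
    K = Km + Vc * (Kc - Km) / (1.0 + (1.0 - Vc) * (Kc - Km) / (Km + 4.0 * Gm / 3.0))
    G = Gm + Vc * (Gc - Gm) / (1.0 + (1.0 - Vc) * (Gc - Gm) / (Gm + fm))
    return K, G

# Power-law gradation through the thickness z in [-h/2, h/2]: Vc = (z/h + 1/2)**p
h, p = 0.02, 2.0
z = np.linspace(-h / 2, h / 2, 11)
Vc = (z / h + 0.5) ** p
# Illustrative metal-matrix / ceramic-particle moduli in GPa
K, G = mori_tanaka(Km=70.0, Gm=27.0, Kc=167.0, Gc=77.0, Vc=Vc)
E = 9.0 * K * G / (3.0 * K + G)   # effective Young's modulus layer by layer
print(np.round(E, 1))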
Abstract:
We discuss theoretical and phenomenological aspects of two-Higgs-doublet extensions of the Standard Model. In general, these extensions have scalar-mediated flavour-changing neutral currents, which are strongly constrained by experiment. Various strategies to control these flavour-changing scalar currents are discussed and their phenomenological consequences analysed. In particular, scenarios with natural flavour conservation are investigated, including the so-called type I and type II models as well as lepton-specific and inert models. Type III models are then discussed, where scalar flavour-changing neutral currents are present at tree level but are suppressed either by a specific ansatz for the Yukawa couplings or by the introduction of family symmetries leading to a natural suppression mechanism. We also consider the phenomenology of charged scalars in these models. Next we turn to the role of symmetries in the scalar sector. We discuss the six symmetry-constrained scalar potentials and their extension into the fermion sector. The vacuum structure of the scalar potential is analysed, including a study of the vacuum stability conditions on the potential, and the renormalization-group improvement of these conditions is also presented. The stability of the tree-level minimum of the scalar potential in connection with electric charge conservation, and its behaviour under CP, is analysed. The question of CP violation is addressed in detail, including the cases of explicit and spontaneous CP violation. We present a detailed study of weak-basis invariants which are odd under CP. These invariants allow one to study the CP properties of any two-Higgs-doublet model in an arbitrary Higgs basis. A careful study of spontaneous CP violation is presented, including an analysis of the conditions which have to be satisfied in order for a vacuum to violate CP. We present minimal models of CP violation where the vacuum phase is sufficient to generate a complex CKM matrix, which is at present a requirement for any realistic model of spontaneous CP violation.
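For reference, the general two-Higgs-doublet scalar potential discussed above is conventionally written (notation varies across the literature) as

V = m_{11}^2\,\Phi_1^\dagger\Phi_1 + m_{22}^2\,\Phi_2^\dagger\Phi_2 - \left(m_{12}^2\,\Phi_1^\dagger\Phi_2 + \mathrm{h.c.}\right)
    + \tfrac{\lambda_1}{2}\left(\Phi_1^\dagger\Phi_1\right)^2 + \tfrac{\lambda_2}{2}\left(\Phi_2^\dagger\Phi_2\right)^2
    + \lambda_3\left(\Phi_1^\dagger\Phi_1\right)\left(\Phi_2^\dagger\Phi_2\right) + \lambda_4\left(\Phi_1^\dagger\Phi_2\right)\left(\Phi_2^\dagger\Phi_1\right)
    + \left[\tfrac{\lambda_5}{2}\left(\Phi_1^\dagger\Phi_2\right)^2 + \lambda_6\left(\Phi_1^\dagger\Phi_1\right)\left(\Phi_1^\dagger\Phi_2\right) + \lambda_7\left(\Phi_2^\dagger\Phi_2\right)\left(\Phi_1^\dagger\Phi_2\right) + \mathrm{h.c.}\right],

with the six symmetry-constrained potentials corresponding to particular relations among these parameters.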
Abstract:
Beam-like structures are among the most common components in engineering practice, and single-side damage is often encountered. In this study, single-side damage in a free-free beam is analysed with three different finite element models, namely solid, shell and beam models, to demonstrate their performance in simulating real structures. As in the experiment, damage is introduced into one side of the beam, and natural frequencies are extracted from the simulations and compared with experimental and analytical results. Mode shapes are also analysed with the modal assurance criterion. The simulation results reveal a good performance of all three models in extracting natural frequencies; in the intact state the solid model performs better than the shell model, which in turn performs better than the beam model. For damaged states, the natural frequencies captured from the solid model show more sensitivity to damage severity than those from the shell model, while the shell model performs similarly to the beam model in distinguishing damage. The main contribution of this paper is the comparison of three finite element models against experimental data as well as analytical solutions. The finite element results show relatively good performance.
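A minimal sketch of the modal assurance criterion used to compare mode shapes (the standard definition; the variable names and example vectors are ours):

import numpy as np

def mac(phi_i, phi_j):
    # MAC = |phi_i . phi_j|^2 / ((phi_i . phi_i)(phi_j . phi_j));
    # 1 indicates perfectly correlated mode shapes, 0 orthogonal ones.
    return np.abs(phi_i @ phi_j) ** 2 / ((phi_i @ phi_i) * (phi_j @ phi_j))

# Example: a simulated shape against a slightly perturbed "measured" one
phi_fem = np.sin(np.linspace(0, np.pi, 20))
phi_exp = phi_fem + 0.05 * np.random.default_rng(0).standard_normal(20)
print(f"MAC = {mac(phi_fem, phi_exp):.3f}")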
Abstract:
The lives of humans and most living beings depend on sensation and perception for the best assessment of the surrounding world. Sensorial organs acquire a variety of stimuli that are interpreted and integrated in our brain for immediate use or stored in memory for later recall. Among the reasoning aspects, a person has to decide what to do with available information. Emotions are classifiers of collected information, assigning a personal meaning to objects, events and individuals, and forming part of our own identity. Emotions play a decisive role in cognitive processes such as reasoning, decision-making and memory by assigning relevance to collected information. Access to pervasive computing devices, empowered by the ability to sense and perceive the world, provides new forms of acquiring and integrating information. But before data can be assessed for usefulness, systems must capture it and ensure it is properly managed for diverse possible goals. Portable and wearable devices are now able to gather and store information from the environment and from our body, using cloud-based services and Internet connections. Systems' limitations in handling sensorial data, compared with our own sensorial capabilities, constitute an identified problem. Another problem is the lack of interoperability between humans and devices, as devices do not properly understand humans' emotional states and needs. Addressing those problems is the motivation for the present research work. The mission assumed here is to incorporate sensorial and physiological data into a Framework able to manage collected data towards human cognitive functions, supported by a new data model. By learning from selected human functional and behavioural models and reasoning over collected data, the Framework aims at providing an evaluation of a person's emotional state, empowering human-centric applications, along with the capability of storing episodic information on a person's life, with physiologic indicators of emotional states, to be used by new-generation applications.
Abstract:
The design of anchorage blisters for the internal continuity post-tensioning tendons of bridges built by the cantilever method presents some peculiarities, not only because they are intermediate anchorages but also because these anchorages are located in blisters, so the prestressing force has to be transferred from the blister to the bottom slab and web of the girder. The high density of steel reinforcement in anchorage blisters is the most common reason for problems with concrete cast in situ, resulting in zones of poorly compacted concrete and leading to concrete crushing failures under the anchor plates. A solution may involve improving the concrete's compressive and tensile strength. To meet these requirements, a high-performance fibre-reinforced self-compacting mixture (HPFRC) was used in anchorage corner blisters of post-tensioning tendons, reducing the concrete cross-section and decreasing the reinforcement needed. To assess the ultimate capacity and adequate serviceability of the local anchorage zone after reducing the minimum concrete cross-section and the confining reinforcement specified by the anchorage device supplier for the particular tendon, load transfer tests were performed. To investigate the behaviour of anchorage blisters regarding the transmission of stresses to the web and the bottom slab of the girder, and the feasibility of using high-performance concrete only in the blister, two half-scale models of the lower corner of a box girder of an existing bridge were studied: a reference specimen of ordinary reinforced concrete and an HPFRC blister specimen. The design of the reinforcement was based on the tensile forces obtained from strut-and-tie models. An experimental programme was carried out to assess the models used in design and to study the feasibility of using high-performance concrete only in the blister, either cast in situ or precast. A non-linear finite element analysis of the tested specimens was also performed and the results compared.
Abstract:
Composite materials exhibit complex behavior that is difficult to predict under different types of loads. In the course of this dissertation, a methodology was developed to predict failure and damage propagation in composite material specimens. This methodology uses finite element numerical models created with the Ansys and Matlab software packages. The methodology performs an incremental-iterative analysis, which gradually increases the load applied to the specimen. Several structural failure phenomena are considered, such as fiber and/or matrix failure, delamination and shear plasticity. Failure criteria based on element stresses were implemented, and a procedure to reduce the stiffness of failed elements was developed. The material studied in this dissertation is a spread-tow carbon fabric with a 0°/90° arrangement, and the main numerical model analyzed is a 26-ply specimen under compression loads. Numerical results were compared with the results of specimens tested experimentally, whose mechanical properties were unknown; only the geometry of the specimen was known. The material properties of the numerical model were adjusted in the course of the dissertation in order to minimize the difference between numerical and experimental results, achieving an error below 5% (i.e., numerical model identification based on the experimental results was performed).
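A schematic sketch of such an incremental-iterative scheme with stiffness degradation (a generic ply-discount loop, not the dissertation's actual Ansys/Matlab implementation; solve_stresses and check_failure are hypothetical callbacks standing in for the finite element solve and the implemented failure criteria):

import numpy as np

def incremental_failure_analysis(solve_stresses, check_failure, E0, n_steps, dP):
    # E0: initial per-element stiffness; solve_stresses(E, P) returns element
    # stresses; check_failure(stress) returns a boolean array of failed elements.
    E = E0.astype(float).copy()
    failed = np.zeros(E.shape, dtype=bool)
    for step in range(1, n_steps + 1):
        P = step * dP                        # current load level
        while True:                          # iterate at fixed load level
            stress = solve_stresses(E, P)
            new_fail = check_failure(stress) & ~failed
            if not new_fail.any():
                break                        # no new failures: step converged
            failed |= new_fail
            E[new_fail] *= 0.01              # discount stiffness of failed elements
        if failed.all():                     # all elements failed: collapse load
            return P, failed
    return n_steps * dP, failed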
Abstract:
Both culture coverage and digital journalism are contemporary phenomena that have undergone several transformations within a short period of time. Whenever the media enter a period of uncertainty such as the present one, there is an attempt to innovate in order to seek sustainability, skip the crisis or find a new public. This indicates that there are new trends to be understood and explored, i.e., how are media innovating in a digital environment? Not only does the professional debate about the future of journalism justify the need to explore the issue, but so do the academic approaches to cultural journalism. However, none of the studies so far have considered innovation as a motto or driver and tried to explain how the media are covering culture, achieving sustainability and engaging with readers in a digital environment. This research examines how European media which specialize in culture or have an important cultural section are innovating in a digital environment. Specifically, we look at how these innovation strategies are being pursued in relation to the approach to culture and dominant cultural areas, editorial models, the use of digital tools for telling stories, overall brand positioning and extensions, engagement with the public and business models. We conducted a mixed-methods study combining case studies of four media projects, integrating qualitative web-feature and content analysis with quantitative web content analysis. The four case studies chosen were two major general-interest journalistic brands which started as physical newspapers – The Guardian (London, UK) and Público (Lisbon, Portugal) – a magazine specialized in international affairs, culture and design – Monocle (London, UK) – and a native digital media project launched by a cultural organization – Notodo, by La Fábrica. Findings suggest, on the one hand, that we are witnessing a paradigm shift in culture coverage in a digital environment, challenging traditional boundaries related to cultural themes and scope, angles, genres, content format and delivery, engagement and business models. Innovation in the four case studies lies especially along the product dimensions (format and content), brand positioning and process (business model and ways to engage with users). On the other hand, there are still perennial values that are crucial to innovation and sustainability, such as commitment to journalism, consistency (to the reader, to brand extensions and to the advertiser), intelligent differentiation and the capability of knowing what innovation means and how it can be applied, since this thesis also confirms that one formula doesn't suit all. The main challenges for some media are still changing minds, overcoming cultural inertia and optimizing the memory of their websites, looking at them as living, organic bodies which continuously interact with readers in many different ways, rather than as closed collections of articles.
Abstract:
We study the longitudinal and transverse spin dynamical structure factors of the spin-1/2 XXX chain at finite magnetic field h, focusing in particular on the singularities at excitation energies in the vicinity of the lower thresholds. While the static properties of the model can be studied within a Fermi-liquid-like description in terms of pseudoparticles, our derivation of the dynamical properties relies on the introduction of a form of the 'pseudofermion dynamical theory' (PDT) of the 1D Hubbard model, suitably modified for the spin-only XXX chain and other models with two pseudoparticle Fermi points. Specifically, we derive the exact momentum and spin-density dependences of the exponents ζ_τ(k) controlling the singularities of both the longitudinal (τ = l) and transverse (τ = t) dynamical structure factors over the whole momentum range 0 < k < π, in the thermodynamic limit. This requires the numerical solution of the integral equations that define the phase shifts appearing in the expressions of these exponents. We discuss the relation to neutron scattering and suggest new experiments on spin-chain compounds using a carefully oriented crystal to test our predictions.
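Schematically, the threshold singularities referred to above take a power-law form (our notation, following the usual PDT conventions; ω_τ(k) denotes the lower threshold of the τ = l, t spectrum):

S^{\tau\tau}(k,\omega) \propto \left(\omega - \omega_{\tau}(k)\right)^{\zeta_{\tau}(k)}, \qquad \omega \to \omega_{\tau}(k)^{+}.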
Abstract:
Heart tissue inflammation, progressive fibrosis and electrocardiographic alterations occur in approximately 30% of patients infected by Trypanosoma cruzi, 10-30 years after infection. Further, plasma levels of tumour necrosis factor (TNF) and nitric oxide (NO) are associated with the degree of heart dysfunction in chronic chagasic cardiomyopathy (CCC). Thus, our aim was to establish experimental models that mimic a range of parasitological, pathological and cardiac alterations described in patients with chronic Chagas’ heart disease and evaluate whether heart disease severity was associated with increased TNF and NO levels in the serum. Our results show that C3H/He mice chronically infected with the Colombian T. cruzi strain have more severe cardiac parasitism and inflammation than C57BL/6 mice. In addition, connexin 43 disorganisation and fibronectin deposition in the heart tissue, increased levels of creatine kinase cardiac MB isoenzyme activity in the serum and more severe electrical abnormalities were observed in T. cruzi-infected C3H/He mice compared to C57BL/6 mice. Therefore, T. cruzi-infected C3H/He and C57BL/6 mice represent severe and mild models of CCC, respectively. Moreover, the CCC severity paralleled the TNF and NO levels in the serum. Therefore, these models are appropriate for studying the pathophysiology and biomarkers of CCC progression, as well as for testing therapeutic agents for patients with Chagas’ heart disease.
Abstract:
Quantitative or algorithmic trading is the automation of investment decisions obeying a fixed or dynamic set of rules to determine trading orders. It has increasingly made its way up to 70% of the trading volume on some of the biggest financial markets, such as the New York Stock Exchange (NYSE). However, there is not a significant amount of academic literature devoted to it, owing to the private nature of investment banks and hedge funds. This project aims to review the literature and discuss the available models in a subject where publications are scarce and infrequent. We review the basic and fundamental mathematical concepts needed for modelling financial markets, such as stochastic processes, stochastic integration and basic models for price and spread dynamics, necessary for building quantitative strategies. We also contrast these models with real market data sampled at one-minute frequency from the Dow Jones Industrial Average (DJIA). Quantitative strategies try to exploit two types of behaviour: trend following or mean reversion. The former is grouped into the so-called technical models and the latter into so-called pairs trading. Technical models have been discarded by financial theoreticians, but we show that they can be properly cast as well-defined scientific predictors if the signals they generate pass the test of being a Markov time. That is, we can tell whether the signal has occurred or not by examining the information up to the current time; more technically, the event is F_t-measurable. On the other hand, the concept of pairs trading, or market-neutral strategy, is fairly simple. However, it can be cast in a variety of mathematical models, ranging from a method based on a simple Euclidean distance, through a co-integration framework, to stochastic differential equations such as the well-known Ornstein-Uhlenbeck mean-reversion SDE and its variations. A model for forecasting any economic or financial magnitude could be properly defined with scientific rigour yet lack any economic value and be considered useless from a practical point of view. This is why this project could not be complete without a backtesting of the mentioned strategies. Conducting a useful and realistic backtest is by no means a trivial exercise, since the "laws" that govern financial markets are constantly evolving in time. This is why we emphasize the calibration of the strategies' parameters to the prevailing market conditions. We find that the parameters of technical models are more volatile than their counterparts from market-neutral strategies, and that calibration must be done at high sampling frequency to constantly track the current market situation. As a whole, the goal of this project is to provide an overview of the quantitative approach to investment, reviewing basic strategies and illustrating them by means of a backtest with real financial market data. The sources of the data used in this project are Bloomberg for intraday time series and Yahoo! for daily prices. All numeric computations and graphics used and shown in this project were implemented from scratch in MATLAB as part of this thesis. No other mathematical or statistical software was used.
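An illustrative sketch of the Ornstein-Uhlenbeck dynamics mentioned above (in Python rather than the thesis's MATLAB, with hypothetical parameter values): the spread X follows dX = θ(μ − X)dt + σ dW, and its exact discretization is an AR(1) process that can be calibrated by ordinary least squares.

import numpy as np

rng = np.random.default_rng(0)
theta, mu, sigma, dt, n = 2.0, 0.0, 0.3, 1.0 / 390, 10_000  # hypothetical values

# Exact discretization: X[t+dt] = mu + (X[t]-mu)*exp(-theta*dt) + Gaussian noise
a = np.exp(-theta * dt)
noise_sd = sigma * np.sqrt((1 - a**2) / (2 * theta))
x = np.empty(n)
x[0] = mu
for t in range(n - 1):
    x[t + 1] = mu + (x[t] - mu) * a + noise_sd * rng.standard_normal()

# Calibration via OLS on the implied AR(1): x[t+1] = c + b*x[t] + eps
b, c = np.polyfit(x[:-1], x[1:], 1)
theta_hat = -np.log(b) / dt
mu_hat = c / (1 - b)
print(f"theta_hat={theta_hat:.2f}, mu_hat={mu_hat:.3f}")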
Abstract:
The Helvetic nappe system in Western Switzerland is a stack of fold nappes and thrust sheets emplaced under low-grade metamorphism. Fold nappes and thrust sheets are also some of the most common features in orogens. Fold nappes are kilometre-scale recumbent folds which feature a weakly deformed normal limb and an intensely deformed overturned limb. Thrust sheets, on the other hand, are characterized by the absence of an overturned limb and can be defined as almost rigid blocks of crust that are displaced sub-horizontally over up to several tens of kilometres. The Morcles and Doldenhorn nappes are classic examples of fold nappes and constitute the so-called infra-Helvetic complex in Western and Central Switzerland, respectively. This complex is overridden by thrust sheets such as the Diablerets and Wildhorn nappes in Western Switzerland. One of the most famous examples of thrust sheets worldwide is the Glarus thrust sheet in Central Switzerland, which features over 35 kilometres of thrusting accommodated by a ~1 m thick shear zone. Since the works of the early Alpine geologists such as Heim and Lugeon, the knowledge of these nappes has been steadily refined, and today the geometry and kinematics of the Helvetic nappe system are generally agreed upon. However, despite the extensive knowledge we have today of the kinematics of fold nappes and thrust sheets, the mechanical processes leading to the emplacement of these nappes are still poorly understood. For a long time geologists faced the so-called 'mechanical paradox', which arises from the fact that a block of rock several kilometres high and tens of kilometres long (i.e. a nappe) would break internally rather than start moving on a low-angle plane. Several solutions were proposed to resolve this apparent paradox. Certainly the most successful is the theory of critical wedges (e.g. Chapple, 1978; Dahlen, 1984). In this theory the orogen is considered as a whole, and this change of scale allows thrust-sheet-like structures to form while remaining consistent with mechanics. However, this theory is intricately linked to brittle rheology, and fold nappes, which are inherently ductile structures, cannot be created in these models. When considering the problem of nappe emplacement from the perspective of ductile rheology, the problem of strain localization arises. The aim of this thesis was to develop and apply models based on continuum mechanics, integrating heat transfer, to understand the emplacement of nappes. The models were solved either analytically or numerically. In the first two papers of this thesis we derived a simple model which describes channel flow of a homogeneous material with temperature-dependent viscosity. We applied this model to the Morcles fold nappe and to several kilometre-scale shear zones worldwide. In the last paper we zoomed out and studied the tectonics of (i) ductile and (ii) visco-elasto-plastic, temperature-dependent wedges, focusing on the relationship between basement and cover deformation. We demonstrated that during the compression of a ductile passive margin both fold nappes and thrust sheets can develop, and that these apparently different structures constitute two end-members of a single structure (i.e. the nappe). The transition from fold nappe to thrust sheet is to first order controlled by the deformation of the basement.
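A minimal statement of the channel-flow balance underlying such models (a schematic on our part, using a Frank-Kamenetskii-type viscosity law; the thesis's exact formulation may differ):

\frac{\partial}{\partial y}\!\left(\eta(T)\,\frac{\partial v_x}{\partial y}\right) = \frac{\partial p}{\partial x}, \qquad \eta(T) = \eta_0\,e^{-\gamma\,(T - T_0)},

where v_x is the along-channel velocity, \partial p/\partial x the driving pressure gradient, and \gamma the temperature sensitivity of the viscosity.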
Abstract:
Species' geographic ranges are usually considered basic units in macroecology and biogeography, yet they remain difficult to measure accurately for many reasons. About 20 years ago, researchers started using local data on species' occurrences to estimate broad-scale ranges, thereby establishing the niche modeling approach. However, there are still many problems in model evaluation and application, and one solution is to find a consensus among models derived from different mathematical and statistical methods for niche modeling, climatic projections and variable combinations, all of which are sources of uncertainty during niche modeling. In this paper, we discuss this approach of ensemble forecasting and propose that it can be divided into three phases of increasing complexity. Phase I is the simple combination of maps to achieve a consensual and, hopefully, conservative solution. In Phase II, differences among the maps used are described by multivariate analyses, and Phase III consists of the quantitative evaluation of the relative magnitude of uncertainties from different sources and their mapping. To illustrate these developments, we analyzed occurrence data for the tiger moth Utetheisa ornatrix (Lepidoptera, Arctiidae), a Neotropical moth species, and modeled its geographic range under current and future climates.
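A minimal sketch of the Phase I map combination (placeholder model outputs; a majority rule is assumed here as the consensus criterion):

import numpy as np

# Stack binary presence/absence maps from different niche models and keep
# the cells predicted as presence by a majority of them.
rng = np.random.default_rng(1)
maps = rng.random((7, 50, 80)) > 0.5     # hypothetical (n_models, ny, nx) predictions
consensus = maps.mean(axis=0)            # per-cell agreement in [0, 1]
conservative_range = consensus >= 0.5    # majority-rule consensus map
print(conservative_range.sum(), "cells retained")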
Abstract:
The objective of this paper is to compare the performance of two predictive radiological models, logistic regression (LR) and neural network (NN), with five different resampling methods. One hundred and sixty-seven patients with proven calvarial lesions as the only known disease were enrolled. Clinical and CT data were used for the LR and NN models. Both models were developed with cross-validation, leave-one-out and three different bootstrap algorithms. The final results of each model were compared using the error rate and the area under the receiver operating characteristic curve (Az). The neural network obtained a statistically higher Az than LR with cross-validation. The remaining resampling validation methods did not reveal statistically significant differences between the LR and NN rules. The neural network classifier performs better than the one based on logistic regression. This advantage is well detected by three-fold cross-validation, but remains unnoticed when leave-one-out or bootstrap algorithms are used.
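A minimal sketch of such a comparison (synthetic stand-in data, since the clinical/CT features are not public; scikit-learn estimators are assumed):

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

# Synthetic placeholder for the 167-patient clinical/CT feature set
X, y = make_classification(n_samples=167, n_features=10, random_state=0)

lr = LogisticRegression(max_iter=1000)
nn = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)

# Three-fold cross-validated area under the ROC curve (Az) for each model
for name, model in [("LR", lr), ("NN", nn)]:
    az = cross_val_score(model, X, y, cv=3, scoring="roc_auc")
    print(f"{name}: Az = {az.mean():.3f} +/- {az.std():.3f}")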
Abstract:
Statistical models allow the representation of data sets and the estimation and/or prediction of the behavior of a given variable through its interaction with the other variables involved in a phenomenon. Among the various statistical models are autoregressive state-space models (ARSS) and linear regression models (LR), which allow the quantification of the relationships among soil-plant-atmosphere system variables. To compare the quality of the ARSS and LR models for modeling the relationships between soybean yield and soil physical properties, Akaike's Information Criterion, which provides a coefficient for the selection of the best model, was used in this study. The data sets were sampled in a Rhodic Acrudox soil along a spatial transect with 84 points spaced 3 m apart. At each sampling point, soybean samples were collected for yield quantification. At the same site, soil penetration resistance was also measured, and soil samples were collected to measure soil bulk density in the 0-0.10 m and 0.10-0.20 m layers. Results showed an autocorrelation and cross-correlation structure between soybean yield and soil penetration resistance data. Soil bulk density data, however, were only autocorrelated in the 0-0.10 m layer and not cross-correlated with soybean yield. Under Akaike's Information Criterion, the autoregressive state-space models were more efficient than the equivalent simple and multiple linear regression models: their AIC values were lower than those obtained by the regression models for all combinations of explanatory variables.
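For reference, the criterion penalizes goodness of fit by model complexity, with the lowest value indicating the preferred model:

\mathrm{AIC} = 2k - 2\ln\hat{L},

where k is the number of estimated parameters and \hat{L} the maximized likelihood of the model.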