886 results for Gaussian complexities
Abstract:
1. Digital elevation models (DEMs) are often used in landscape ecology to retrieve elevation or first-derivative terrain attributes such as slope or aspect in the context of species distribution modelling. However, DEM-derived variables are scale-dependent and, given the increasing availability of very high-resolution (VHR) DEMs, their ecological relevance must be assessed for different spatial resolutions. 2. In a study area located in the Swiss Western Alps, we computed VHR DEM-derived variables related to morphometry, hydrology and solar radiation. Based on an original spatial resolution of 0.5 m, we generated DEM-derived variables at 1, 2 and 4 m spatial resolutions, applying a Gaussian pyramid. Their associations with local climatic factors, measured by sensors (direct and ambient air temperature, air humidity and soil moisture), as well as ecological indicators derived from species composition, were assessed with multivariate generalized linear models (GLM) and mixed models (GLMM). 3. Specific VHR DEM-derived variables showed significant associations with climatic factors. In addition to slope, aspect and curvature, the underused wetness and ruggedness indices modelled measured ambient humidity and soil moisture, respectively. Remarkably, the spatial resolution of VHR DEM-derived variables had a significant influence on the models' strength, with coefficients of determination decreasing with coarser resolutions or showing a local optimum with a 2 m resolution, depending on the variable considered. 4. These results support the relevance of using multi-scale DEM variables to provide surrogates for important climatic variables such as humidity, moisture and temperature, offering suitable alternatives to direct measurements for evolutionary ecology studies at a local scale.
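A Gaussian pyramid coarsens a raster by repeated Gaussian smoothing followed by subsampling. A minimal sketch of that downsampling step, using SciPy on an assumed synthetic DEM array with an illustrative smoothing width (not the study's actual processing chain), could look like this:

import numpy as np
from scipy.ndimage import gaussian_filter

def gaussian_pyramid(dem: np.ndarray, levels: int, sigma: float = 1.0):
    """Return successively coarser versions of a DEM (e.g. 0.5 m -> 1 m -> 2 m -> 4 m).

    Each level is obtained by Gaussian smoothing (to limit aliasing)
    followed by dropping every other row and column.
    """
    pyramid = [dem]
    current = dem
    for _ in range(levels):
        smoothed = gaussian_filter(current, sigma=sigma)  # low-pass filter
        current = smoothed[::2, ::2]                      # halve the resolution
        pyramid.append(current)
    return pyramid

# Example: a synthetic 0.5 m DEM downsampled three times (1, 2 and 4 m grids)
dem_05m = np.random.rand(512, 512)
levels = gaussian_pyramid(dem_05m, levels=3)
print([lvl.shape for lvl in levels])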
Abstract:
The analysis of paraxial Gaussian beams features in most undergraduate courses in laser physics, advanced optics and photonics. These beams provide a simple model of the field generated in the resonant cavities of lasers, thus constituting a basic element for understanding laser theory. Usually, uniformly polarized beams are considered in the analytical calculations, with the electric field vibrating in planes normal to the propagation direction. However, such paraxial fields do not satisfy the Maxwell equations. In this paper we discuss how to overcome this apparent contradiction and evaluate the longitudinal component that any paraxial Gaussian beam should exhibit. Although the assumption of a purely transverse paraxial field is useful and accurate, including this issue in the course helps students appreciate the importance of the electromagnetic nature of light, thus providing a more complete understanding of the paraxial approach.
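The size of this longitudinal component follows directly from Gauss's law in free space. A worked sketch for an x-polarized paraxial beam with slowly varying envelope E_x(x, y, z) (notation assumed here, not taken from the paper):

\nabla \cdot \mathbf{E} = 0
\;\Rightarrow\;
\frac{\partial E_z}{\partial z} = -\frac{\partial E_x}{\partial x}
\;\Rightarrow\;
E_z \simeq \frac{i}{k}\,\frac{\partial E_x}{\partial x},

so the longitudinal field is smaller than the transverse one by roughly 1/(k w_0) for a beam waist w_0, i.e. small but strictly nonzero, which is the apparent contradiction the paper resolves.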
Abstract:
PURPOSE: Iterative algorithms introduce new challenges in the field of image quality assessment. The purpose of this study is to use a mathematical model to objectively evaluate low contrast detectability in CT. MATERIALS AND METHODS: A QRM 401 phantom containing 5 and 8 mm diameter spheres with contrast levels of 10 and 20 HU was used. The images were acquired at 120 kV with CTDIvol equal to 5, 10, 15 and 20 mGy and reconstructed using the filtered back-projection (FBP), adaptive statistical iterative reconstruction 50% (ASIR 50%) and model-based iterative reconstruction (MBIR) algorithms. The model observer used is the Channelized Hotelling Observer (CHO) with dense difference-of-Gaussian (D-DOG) channels. The CHO performances were compared with the outcomes of six human observers who performed four-alternative forced choice (4-AFC) tests. RESULTS: For the same CTDIvol level, and according to the CHO model, the MBIR algorithm gives the highest detectability index. The outcomes of the human observers and the results of the CHO are highly correlated regardless of the dose level, the signal considered and the algorithm used, provided that some noise is added to the CHO model. The Pearson coefficient between the human observers and the CHO is 0.93 for FBP and 0.98 for MBIR. CONCLUSION: The human observers' performances can be predicted by the CHO model. This opens the way to proposing, alongside the standard dose report, the expected level of low contrast detectability. The introduction of iterative reconstruction requires such an approach to ensure that dose reduction does not impair diagnosis.
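As a rough illustration of the channelized Hotelling observer machinery described above (a generic sketch with assumed array shapes and synthetic data, not the study's actual implementation): image regions are projected onto a small set of channels, and the detectability index is computed from the channel-space mean difference and pooled covariance.

import numpy as np

def cho_detectability(signal_imgs, noise_imgs, channels):
    """Channelized Hotelling observer detectability index d'.

    signal_imgs, noise_imgs : (n_images, n_pixels) arrays of flattened ROIs
    channels                : (n_pixels, n_channels) matrix, e.g. D-DOG channels
    """
    # Project each image onto the channels
    v_sig = signal_imgs @ channels          # (n_images, n_channels)
    v_noi = noise_imgs @ channels

    delta_v = v_sig.mean(axis=0) - v_noi.mean(axis=0)  # mean channel-response difference
    cov = 0.5 * (np.cov(v_sig, rowvar=False) + np.cov(v_noi, rowvar=False))  # pooled covariance

    # Hotelling template and detectability index
    w = np.linalg.solve(cov, delta_v)
    d_prime = np.sqrt(delta_v @ w)
    return d_prime

# Example with synthetic data: 100 ROIs of 32x32 pixels and 10 illustrative channels
rng = np.random.default_rng(0)
channels = rng.normal(size=(32 * 32, 10))
noise = rng.normal(size=(100, 32 * 32))
signal = noise + 0.2            # crude "signal present" images
print(cho_detectability(signal, noise, channels))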
Abstract:
New genes contribute substantially to adaptive evolutionary innovation, but the functional evolution of new mammalian genes has been little explored at a broad scale. Previous work established mRNA-derived gene duplicates, known as retrocopies, as models for the study of new gene origination. Here we combine mammalian transcriptomic and epigenomic data to unveil the processes underlying the evolution of stripped-down retrocopies into complex new genes. We show that although some robustly expressed retrocopies are transcribed from preexisting promoters, most evolved new promoters from scratch or recruited proto-promoters in their genomic vicinity. In particular, many retrocopy promoters emerged from ancestral enhancers (or bivalent regulatory elements) or are located in CpG islands not associated with other genes. We detected 88-280 selectively preserved retrocopies per mammalian species, illustrating that these mechanisms facilitated the birth of many functional retrogenes during mammalian evolution. The regulatory evolution of originally monoexonic retrocopies was frequently accompanied by exon gain, which facilitated co-option of distant promoters and allowed expression of alternative isoforms. While young retrogenes are often initially expressed in the testis, increased regulatory and structural complexities allowed retrogenes to functionally diversify and evolve somatic organ functions, sometimes as complex as those of their parents. Thus, some retrogenes evolved the capacity to temporarily substitute for their parents during the process of male meiotic X inactivation, while others rendered parental functions superfluous, allowing for parental gene loss. Overall, our reconstruction of the "life history" of mammalian retrogenes highlights retroposition as a general model for understanding new gene birth and functional evolution.
Abstract:
Simple Heuristics in a Social World invites readers to discover the simple heuristics that people use to navigate the complexities and surprises of environments populated with others. The social world is a terrain where humans and other animals compete with conspecifics for myriad resources, including food, mates, and status, and where rivals grant the decision maker little time for deep thought, protracted information search, or complex calculations. Yet, the social world also encompasses domains where social animals such as humans can learn from one another and can forge alliances with one another to boost their chances of success. According to the book's thesis, the undeniable complexity of the social world does not dictate cognitive complexity, as many scholars of rationality argue. Rather, it entails circumstances that render optimization impossible or computationally arduous: intractability, the existence of incommensurable considerations, and competing goals. With optimization beyond reach, less can be more. That is, heuristics--simple strategies for making decisions when time is pressing and careful deliberation an unaffordable luxury--become indispensable mental tools. As accurate as or even more accurate than complex methods when used in the appropriate social environments, these heuristics are good descriptive models of how people make many decisions and inferences, but their impressive performance also poses a normative challenge for optimization models. In short, Homo socialis may prove to be a Homo heuristicus whose intelligence reflects ecological rather than logical rationality.
Abstract:
This thesis develops a comprehensive and flexible statistical framework for the analysis and detection of space, time and space-time clusters of environmental point data. The developed clustering methods were applied to both simulated datasets and real-world environmental phenomena; however, only the cases of forest fires in the Canton of Ticino (Switzerland) and in Portugal are expounded in this document. Typically, environmental phenomena can be modelled as stochastic point processes where each event, e.g. the forest fire ignition point, is characterised by its spatial location and occurrence in time. Additionally, information such as burned area, ignition causes, land use, topographic, climatic and meteorological features, etc., can also be used to characterise the studied phenomenon. Thereby, the space-time pattern characterisation represents a powerful tool to understand the distribution and behaviour of the events and their correlation with underlying processes, for instance, socio-economic, environmental and meteorological factors. Consequently, we propose a methodology based on the adaptation and application of statistical and fractal point process measures for both global (e.g. the Morisita index, the box-counting fractal method, the multifractal formalism and Ripley's K-function) and local (e.g. scan statistics) analysis. Many measures describing the space-time distribution of environmental phenomena have been proposed in a wide variety of disciplines; nevertheless, most of these measures are of global character and do not consider complex spatial constraints, the high variability and the multivariate nature of the events. Therefore, we propose a statistical framework that takes into account the complexities of the geographical space where phenomena take place, by introducing the Validity Domain concept and carrying out clustering analyses on data with differently constrained geographical spaces, hence assessing the relative degree of clustering of the real distribution. Moreover, exclusively for the forest fire case, this research proposes two new methodologies: one for defining and mapping the Wildland-Urban Interface (WUI), described as the interaction zone between burnable vegetation and anthropogenic infrastructures, and one for predicting fire ignition susceptibility. In this regard, the main objective of this thesis was to carry out basic statistical/geospatial research with a strong applied component, to analyse and describe complex phenomena as well as to overcome unsolved methodological problems in the characterisation of space-time patterns, in particular, forest fire occurrences. Thus, this thesis provides a response to the increasing demand for both environmental monitoring and management tools for the assessment of natural and anthropogenic hazards and risks, sustainable development, retrospective success analysis, etc. The major contributions of this work were presented at national and international conferences and published in 5 scientific journals. National and international collaborations were also established and successfully accomplished.
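For readers unfamiliar with the global measures listed above, a minimal sketch of one of them, Ripley's K-function estimated without edge correction (illustrative code, not the thesis implementation), is:

import numpy as np

def ripley_k(points: np.ndarray, radii: np.ndarray, area: float) -> np.ndarray:
    """Naive Ripley's K estimate for a 2-D point pattern (no edge correction).

    points : (n, 2) array of event coordinates (e.g. fire ignition points)
    radii  : distances r at which K(r) is evaluated
    area   : area of the study region
    """
    n = len(points)
    # Pairwise distances between all distinct events
    diff = points[:, None, :] - points[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    np.fill_diagonal(dist, np.inf)                 # exclude self-pairs

    lam = n / area                                  # intensity of the process
    k = np.array([(dist <= r).sum() for r in radii]) / (lam * n)
    return k

# Example: compare a random pattern against the Poisson expectation K(r) = pi * r^2
rng = np.random.default_rng(1)
pts = rng.uniform(0, 100, size=(200, 2))
r = np.linspace(1, 20, 5)
print(ripley_k(pts, r, area=100 * 100))
print(np.pi * r ** 2)

Values well above the Poisson curve indicate clustering at that distance, which is the kind of departure the thesis quantifies within constrained validity domains.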
Abstract:
It is often assumed that total head losses in a sand filter are solely due to the filtration media and that there are analytical solutions, such as the Ergun equation, to compute them. However, total head losses are also due to auxiliary elements (inlet and outlet pipes and filter nozzles), which produce undesirable head losses because they increase energy requirements without contributing to the filtration process. In this study, ANSYS Fluent version 6.3, a commercial computational fluid dynamics (CFD) software program, was used to compute head losses in different parts of a sand filter. Six numerical filter models of varying complexity were used to understand the hydraulic behavior of the various filter elements and their importance in total head losses. The simulation results show that 84.6% of the total head losses were caused by the sand bed and 15.4% by auxiliary elements (4.4% in the outlet and inlet pipes, and 11.0% in the perforated plate and nozzles). Simulation results with different models show the important role of the nozzles in the hydraulic behavior of the sand filter. The ratio between the flow area through the nozzles and the flow area through the perforated plate is an important design parameter for the reduction of total head losses. A reduced ratio caused by nozzle clogging would disproportionately increase the total head losses in the sand filter.
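For reference, the Ergun equation mentioned above gives the pressure drop across a packed bed of height L in terms of the superficial velocity u, particle diameter d_p, bed porosity \varepsilon, fluid viscosity \mu and density \rho (standard textbook form, not specific to this study):

\frac{\Delta P}{L}
  = 150\,\frac{\mu\,(1-\varepsilon)^{2}}{\varepsilon^{3} d_{p}^{2}}\,u
  + 1.75\,\frac{\rho\,(1-\varepsilon)}{\varepsilon^{3} d_{p}}\,u^{2}

This captures only the media contribution, which is precisely why the CFD models are needed to account for the auxiliary elements.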
Abstract:
In this thesis, X-ray tomography is discussed from the Bayesian statistical viewpoint. The unknown parameters are assumed to be random variables and, as opposed to traditional methods, the solution is obtained as a large sample from the distribution of all possible solutions. As an introduction to tomography, an inversion formula for the Radon transform in the plane is presented, and the widely used filtered back-projection algorithm is derived. The traditional regularization methods are presented in sufficient detail to ground the Bayesian approach. The measurements are photon counts at the detector pixels, so the assumption of a Poisson-distributed measurement error is justified. Often the error is assumed Gaussian, although the electronic noise caused by the measurement device can change the error structure; the assumption of a Gaussian measurement error is discussed. The thesis also discusses the use of different prior distributions in X-ray tomography. Especially in severely ill-posed problems, the choice of a suitable prior is the main part of the whole solution process. In the empirical part, the presented prior distributions are tested using simulated measurements, and the effects produced by the different priors are shown. The use of a prior is shown to be indispensable in the case of a severely ill-posed problem.
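A compact way to write the setup sketched above, in generic notation assumed here (discretized attenuation image x, system matrix A, incident intensity I_0, measured counts m_i), is the transmission-tomography Poisson model and the resulting posterior:

m_i \sim \mathrm{Poisson}\!\big( I_0\, e^{-(Ax)_i} \big), \qquad
\pi(x \mid m) \;\propto\; \pi(m \mid x)\,\pi_{\mathrm{pr}}(x),

with the Bayesian solution explored by sampling this posterior rather than by computing a single regularized estimate.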
Abstract:
This paper presents empirical research comparing the accounting difficulties that arise from the use of two valuation methods for biological assets, fair value (FV) and historical cost (HC) accounting, in the agricultural sector. It also compares how reliable each valuation method is in the decision-making process of agents within the sector. By conducting an experiment with students, farmers, and accountants operating in the agricultural sector, we find that they have more difficulties, make larger miscalculations and make poorer judgements with HC accounting than with FV accounting. In-depth interviews uncover flawed accounting practices adopted in the Spanish agricultural sector in order to meet HC accounting requirements. Given the complexities of cost calculation for biological assets and the predominance of small family business units in advanced Western countries, the study concludes that accounting can be more easily applied in the agricultural sector under FV than HC accounting, and that HC conveys a less accurate grasp of the real situation of a farm.
Abstract:
RX J1826.2-1450/LS 5039 has recently been proposed to be a radio-emitting high-mass X-ray binary. In this paper, we present an analysis of its X-ray timing and spectroscopic properties using different instruments on board the RXTE satellite. The timing analysis indicates the absence of pulsed or periodic emission on timescales of 0.02-2000 s and 2-200 d, respectively. The source spectrum is well represented by a power-law model plus a Gaussian component describing a strong iron line at 6.6 keV. Significant emission is seen up to 30 keV, and no exponential cut-off at high energy is required. We also study the radio properties of the system using data from the GBI-NASA Monitoring Program. RX J1826.2-1450/LS 5039 continues to display moderate radio variability with a clearly non-thermal spectral index. No strong radio outbursts have been detected over several months.
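The spectral model described above, a power law plus a Gaussian emission line near 6.6 keV, can be written as (illustrative parameterization; the normalization K, photon index \Gamma, line normalization A, centroid E_0 and width \sigma are assumed symbols, not values from the paper):

F(E) \;=\; K\,E^{-\Gamma}
\;+\; \frac{A}{\sigma\sqrt{2\pi}}\,
\exp\!\left[-\frac{(E - E_0)^{2}}{2\sigma^{2}}\right],
\qquad E_0 \approx 6.6\ \mathrm{keV}.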
Abstract:
In this work, we use the rule of mixtures to develop an equivalent material model in which the total strain energy density is split into an isotropic part related to the matrix component and an anisotropic energy contribution related to the fiber effects. For the isotropic energy part, we select the amended non-Gaussian strain energy density model, while the fiber energy contribution is added by considering the equivalent anisotropic volumetric fraction, as well as the isotropized representation of the eight-chain energy model that accounts for the material's anisotropic effects. Furthermore, our proposed material model uses a phenomenological non-monotonic softening function that predicts stress-softening effects and has an energy term, derived from pseudo-elasticity theory, that accounts for residual strain deformations. The model's theoretical predictions are compared with experimental data collected from human vaginal tissues, mice skin, poly(glycolide-co-caprolactone) (PGC25 3-0) and polypropylene suture materials, and tracheal and brain human tissues. In all cases examined here, our equivalent material model closely follows the stress-softening and residual strain effects exhibited by the experimental data.
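The rule-of-mixtures split described above can be summarized in generic notation (symbols assumed here, not the paper's) as

W \;=\; (1 - v_f)\, W_{\mathrm{iso}} \;+\; v_f\, W_{\mathrm{aniso}},

where v_f is the equivalent fiber volume fraction, W_iso is the amended non-Gaussian matrix energy and W_aniso collects the isotropized eight-chain fiber contribution; the softening function and residual-strain term are then applied on top of this baseline energy.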
Abstract:
We generalize to arbitrary waiting-time distributions some results which were previously derived for discrete distributions. We show that for any two waiting-time distributions with the same mean delay time, that with higher dispersion will lead to a faster front. Experimental data on the speed of virus infections in a plaque are correctly explained by the theoretical predictions using a Gaussian delay-time distribution, which is more realistic for this system than the Dirac delta distribution considered previously [J. Fort and V. Méndez, Phys. Rev. Lett. 89, 178101 (2002)].
Abstract:
The ongoing development of digital media has brought a new set of challenges with it. As images containing more than three wavelength bands, often called spectral images, are becoming a more integral part of everyday life, problems in the quality of the RGB reproduction from spectral images have turned into an important area of research. The notion of image quality is often thought to comprise two distinct areas, image quality itself and image fidelity, both dealing with similar questions: image quality being the degree of excellence of the image, and image fidelity the measure of the match of the image under study to the original. In this thesis, both image fidelity and image quality are considered, with an emphasis on the influence of color and spectral image features on both. There are very few works dedicated to the quality and fidelity of spectral images. Several novel image fidelity measures were developed in this study, including kernel similarity measures and 3D-SSIM (structural similarity index). The kernel measures incorporate the polynomial, Gaussian radial basis function (RBF) and sigmoid kernels. The 3D-SSIM is an extension of the traditional gray-scale SSIM measure developed to incorporate spectral data. The novel image quality model presented in this study is based on the assumption that the statistical parameters of the spectra of an image influence its overall appearance. The spectral image quality model comprises three parameters of quality: colorfulness, vividness and naturalness. The quality prediction is done by modeling the preference function expressed in JNDs (just noticeable differences). Both the image fidelity measures and the image quality model have proven to be effective in the respective experiments.
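The three kernels named above are standard; a minimal sketch of how they could be evaluated on flattened spectral pixel vectors (generic NumPy code with assumed parameter values, not the thesis implementation):

import numpy as np

def polynomial_kernel(x, y, degree=2, c=1.0):
    """k(x, y) = (x . y + c)^degree"""
    return (np.dot(x, y) + c) ** degree

def gaussian_rbf_kernel(x, y, sigma=1.0):
    """k(x, y) = exp(-||x - y||^2 / (2 sigma^2))"""
    return np.exp(-np.sum((x - y) ** 2) / (2.0 * sigma ** 2))

def sigmoid_kernel(x, y, alpha=0.01, c=0.0):
    """k(x, y) = tanh(alpha * x . y + c)"""
    return np.tanh(alpha * np.dot(x, y) + c)

# Example: similarity between two spectra (e.g. 31-band pixels of a spectral image)
rng = np.random.default_rng(0)
s1, s2 = rng.random(31), rng.random(31)
print(polynomial_kernel(s1, s2), gaussian_rbf_kernel(s1, s2), sigmoid_kernel(s1, s2))

Aggregating such per-pixel similarities between a reproduction and its original is one way a kernel-based fidelity measure can be built.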
Abstract:
The first computational implementation that automates the procedures involved in the calculation of infrared intensities using the charge-charge flux-dipole flux model is presented. The atomic charges and dipoles from the Quantum Theory of Atoms in Molecules (QTAIM) model were programmed for the outputs of the Morphy98, Gaussian98 and Gaussian03 programs, but for the ChelpG parameters only the Gaussian programs are supported. Results of illustrative but new calculations for the water, ammonia and methane molecules at the MP2/6-311++G(3d,3p) theoretical level, using the ChelpG and QTAIM/Morphy charges and dipoles, are presented. These results showed excellent agreement with analytical results obtained directly at the MP2/6-311++G(3d,3p) level of theory.
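In the charge-charge flux-dipole flux (CCFDF) picture, the infrared intensity of normal mode j is governed by the derivative of the molecular dipole with respect to the normal coordinate Q_j, decomposed into charge, charge-flux and dipole-flux contributions. A standard form of this decomposition (notation assumed here: q_\alpha, \sigma_\alpha and m_{\alpha,\sigma} are the charge, \sigma Cartesian coordinate and atomic dipole component of atom \alpha, with \sigma \in \{x, y, z\}) is

\frac{\partial p_\sigma}{\partial Q_j}
  = \sum_\alpha \left[
      q_\alpha \frac{\partial \sigma_\alpha}{\partial Q_j}
      + \sigma_\alpha \frac{\partial q_\alpha}{\partial Q_j}
      + \frac{\partial m_{\alpha,\sigma}}{\partial Q_j}
    \right],
\qquad
A_j \;\propto\; \sum_{\sigma = x,y,z}
      \left( \frac{\partial p_\sigma}{\partial Q_j} \right)^{2},

where the three bracketed terms are the charge, charge-flux and dipole-flux contributions, respectively.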
Abstract:
A comparative study based on the potential energy surfaces (PES) of 2-butenedioic acid and the hypothetical 2-butenedioic acid/HCl system is useful for understanding maleic acid isomerization. The PES enables locating minimum-energy conformers, reaction intermediates and transition states. From contour diagrams, a set of possible reaction paths interconnecting the proposed structures is depicted. The study was carried out in the absence and in the presence of the catalyst (HCl), using a solvation model provided by the Gaussian software package. Clearly, the effect of HCl is to open new reaction paths with lower energy barriers relative to the uncatalyzed reaction.