999 results for Classical Data
Abstract:
This thesis studies how to estimate the distribution of regionalized variables whose sample space and scale admit a Euclidean space structure. We apply the principle of working in coordinates: we choose an orthonormal basis, perform statistics on the coordinates of the data, and apply the outputs to the basis in order to recover a result in the original space. Applying this to regionalized variables, we obtain a single, consistent approach that generalizes the well-known properties of kriging techniques to several sample spaces: real, positive, or compositional data (vectors of positive components with constant sum) are treated as particular cases. In this way, linear geostatistics is generalized, and solutions are offered to well-known problems of non-linear geostatistics, adapting the measure and the criteria of representativeness (i.e., means) to the data at hand. The estimator for positive data coincides with a weighted geometric mean, equivalent to estimating the median, without any of the problems of classical lognormal kriging. The compositional case offers equivalent solutions, and in addition allows the estimation of multinomial probability vectors. With a preliminary Bayesian approach, kriging of compositions also becomes a consistent alternative to indicator kriging. The latter technique is used to estimate probability functions of arbitrary variables, although it often yields negative estimates, which the proposed alternative avoids. The usefulness of this set of techniques is demonstrated by studying ammonia pollution at an automatic water-quality monitoring station in the Tordera basin, concluding that only with the proposed techniques can one detect at which instants ammonium is transformed into ammonia at concentrations above the legal limit.
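As a minimal sketch of the working-in-coordinates principle for positive data, the fragment below kriges the logarithms of the observations and maps the result back through the exponential; because the ordinary-kriging weights sum to one, the back-transformed estimate exp(sum_i w_i ln z_i) = prod_i z_i^{w_i} is exactly the weighted geometric mean mentioned in the abstract. The data values and the exponential covariance model are illustrative assumptions, not taken from the thesis.

```python
import numpy as np

def ok_weights(coords, target, cov):
    """Ordinary kriging weights for one target location."""
    n = len(coords)
    K = np.array([[cov(np.linalg.norm(a - b)) for b in coords] for a in coords])
    k0 = np.array([cov(np.linalg.norm(a - target)) for a in coords])
    # Lagrange-augmented system enforcing the unbiasedness constraint sum(w) == 1
    A = np.block([[K, np.ones((n, 1))], [np.ones((1, n)), np.zeros((1, 1))]])
    return np.linalg.solve(A, np.append(k0, 1.0))[:n]

coords = np.array([[0.0], [1.0], [3.0]])   # hypothetical 1-D sample locations
z = np.array([2.0, 8.0, 4.0])              # strictly positive observations
cov = lambda h: np.exp(-h / 2.0)           # assumed covariance model of ln(z)

w = ok_weights(coords, np.array([2.0]), cov)
estimate = np.exp(w @ np.log(z))           # weighted geometric mean of z
print(w, estimate)
```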
Abstract:
It is generally assumed that the variability of neuronal morphology has an important effect on both the connectivity and the activity of the nervous system, but this effect has not been thoroughly investigated. Neuroanatomical archives represent a crucial tool to explore structure-function relationships in the brain. We are developing computational tools to describe, generate, store and render large sets of three-dimensional neuronal structures in a format that is compact, quantitative, accurate and readily accessible to the neuroscientist. Single-cell neuroanatomy can be characterized quantitatively at several levels. In computer-aided neuronal tracing files, a dendritic tree is described as a series of cylinders, each represented by diameter, spatial coordinates and the connectivity to other cylinders in the tree. This 'Cartesian' description constitutes a completely accurate mapping of dendritic morphology but it bears little intuitive information for the neuroscientist. In contrast, a classical neuroanatomical analysis characterizes neuronal dendrites on the basis of the statistical distributions of morphological parameters, e.g. maximum branching order or bifurcation asymmetry. This description is intuitively more accessible, but it only yields information on the collective anatomy of a group of dendrites, i.e. it is not complete enough to provide a precise 'blueprint' of the original data. We are adopting a third, intermediate level of description, which consists of the algorithmic generation of neuronal structures within a certain morphological class based on a set of 'fundamental', measured parameters. This description is as intuitive as a classical neuroanatomical analysis (parameters have an intuitive interpretation), and as complete as a Cartesian file (the algorithms generate and display complete neurons). The advantages of the algorithmic description of neuronal structure are immense. If an algorithm can measure the values of a handful of parameters from an experimental database and generate virtual neurons whose anatomy is statistically indistinguishable from that of their real counterparts, a great deal of data compression and amplification can be achieved. Data compression results from the quantitative and complete description of thousands of neurons with a handful of statistical distributions of parameters. Data amplification is possible because, from a set of experimental neurons, many more virtual analogues can be generated. This approach could allow one, in principle, to create and store a neuroanatomical database containing data for an entire human brain in a personal computer. We are using two programs, L-NEURON and ARBORVITAE, to investigate systematically the potential of several different algorithms for the generation of virtual neurons. Using these programs, we have generated anatomically plausible virtual neurons for several morphological classes, including guinea pig cerebellar Purkinje cells and cat spinal cord motor neurons. These virtual neurons are stored in an online electronic archive of dendritic morphology. This process highlights the potential and the limitations of the 'computational neuroanatomy' strategy for neuroscience databases.
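As a toy sketch of this intermediate, algorithmic level of description: sample a few 'fundamental' parameters from assumed distributions and grow a tree recursively, then validate emergent statistics (e.g. tip counts) against real tracings. All distributions and numbers below are invented for illustration; L-NEURON and ARBORVITAE implement far richer, measurement-driven rule sets.

```python
import random

def grow_branch(order, max_order=5, taper=0.8, bif_prob=0.6):
    """Recursively sample a dendritic subtree as nested (length, diameter) records."""
    length = random.lognormvariate(3.0, 0.5)   # branch length, sampled per branch
    diameter = 2.0 * taper ** order            # simple deterministic taper with order
    children = []
    if order < max_order and random.random() < bif_prob:
        children = [grow_branch(order + 1, max_order, taper, bif_prob),
                    grow_branch(order + 1, max_order, taper, bif_prob)]
    return {"length": length, "diam": diameter, "children": children}

def count_tips(tree):
    """Number of terminal branches: one emergent statistic to compare with data."""
    return 1 if not tree["children"] else sum(count_tips(c) for c in tree["children"])

random.seed(1)
virtual_dendrite = grow_branch(order=0)
print(count_tips(virtual_dendrite))
```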
Abstract:
By eliminating the short range negative divergence of the Debye–Hückel pair distribution function, but retaining the exponential charge screening known to operate at large interparticle separation, the thermodynamic properties of one-component plasmas of point ions or charged hard spheres can be well represented even in the strong coupling regime. Predicted electrostatic free energies agree within 5% of simulation data for typical Coulomb interactions up to a factor of 10 times the average kinetic energy. Here, this idea is extended to the general case of a uniform ionic mixture, comprising an arbitrary number of components, embedded in a rigid neutralizing background. The new theory is implemented in two ways: (i) by an unambiguous iterative algorithm that requires numerical methods and breaks the symmetry of cross correlation functions; and (ii) by invoking generalized matrix inverses that maintain symmetry and yield completely analytic solutions, but which are not uniquely determined. The extreme computational simplicity of the theory is attractive when considering applications to complex inhomogeneous fluids of charged particles.
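For orientation, the pathology being removed can be stated in one line. In linearized Debye–Hückel theory (Gaussian units; a standard textbook form, not quoted from this paper) the pair distribution function for ions of charges q_i and q_j is

\[
g_{ij}(r) \simeq 1 - \frac{\beta q_i q_j}{r}\, e^{-\kappa r}, \qquad \beta = \frac{1}{k_B T},
\]

which for like charges diverges to minus infinity as r goes to 0, an unphysical negative probability density. The construction discussed here excises that short-range region while retaining the exp(-kappa r) screening factor at large separations.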
Abstract:
Data assimilation algorithms are a crucial part of operational systems in numerical weather prediction, hydrology and climate science, but are also important for dynamical reconstruction in medical applications and quality control for manufacturing processes. Usually, a variety of diverse measurement data are employed to determine the state of the atmosphere or of a wider system including land and oceans. Modern data assimilation systems use more and more remote sensing data, in particular radiances measured by satellites, radar data and integrated water vapor measurements via GPS/GNSS signals. The inversion of some of these measurements is ill-posed in the classical sense, i.e. the inverse of the operator H which maps the state onto the data is unbounded. In this case, the use of such data can lead to significant instabilities of data assimilation algorithms. The goal of this work is to provide a rigorous mathematical analysis of the instability of well-known data assimilation methods. Here, we will restrict our attention to particular linear systems, in which the instability can be explicitly analyzed. We investigate three-dimensional variational assimilation and four-dimensional variational assimilation. A theory for the instability is developed using the classical theory of ill-posed problems in a Banach space framework. Further, we demonstrate by numerical examples that instabilities can and will occur, including an example from dynamic magnetic tomography.
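For reference, three-dimensional variational assimilation computes the analysis as the minimizer of the standard quadratic cost functional (our notation, matching the operator H of the abstract):

\[
J(x) = (x - x_b)^\top B^{-1} (x - x_b) + (y - Hx)^\top R^{-1} (y - Hx),
\]

where x_b is the background state and B and R are the background and observation error covariances. The background term plays the role of a Tikhonov penalty for the ill-posed equation Hx = y, which is why the boundedness of the inverse of H is the decisive property in the stability analysis.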
Abstract:
It is well known that there is a dynamic relationship between cerebral blood flow (CBF) and cerebral blood volume (CBV). With increasing applications of functional MRI, where the blood oxygen-level-dependent signals are recorded, the understanding and accurate modeling of the hemodynamic relationship between CBF and CBV becomes increasingly important. This study presents an empirical and data-based modeling framework for model identification from CBF and CBV experimental data. It is shown that the relationship between the changes in CBF and CBV can be described using a parsimonious autoregressive with exogenous input (ARX) model structure. It is observed that neither the ordinary least-squares (LS) method nor the classical total least-squares (TLS) method can produce accurate estimates from the original noisy CBF and CBV data. A regularized total least-squares (RTLS) method is thus introduced and extended to solve such an errors-in-variables problem. Quantitative results show that the RTLS method works very well on the noisy CBF and CBV data. Finally, a combination of RTLS with a filtering method can lead to a parsimonious but very effective model that can characterize the relationship between the changes in CBF and CBV.
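The paper develops its own regularized TLS formulation for this errors-in-variables problem; as a compact, runnable stand-in, the sketch below implements classical TLS via the SVD of the augmented matrix [A | b], together with truncated TLS, one common way of regularizing it. The truncation choice and the synthetic data are ours, not the authors' method.

```python
import numpy as np

def tls(A, b):
    """Classical total least squares for A x ~= b via the SVD of [A | b]."""
    n = A.shape[1]
    _, _, Vt = np.linalg.svd(np.column_stack([A, b]))
    v = Vt[-1]                 # right singular vector of the smallest singular value
    return -v[:n] / v[n]

def truncated_tls(A, b, k):
    """Truncated TLS: regularize by discarding the smallest singular directions."""
    n = A.shape[1]
    _, _, Vt = np.linalg.svd(np.column_stack([A, b]))
    V = Vt.T
    V12, v22 = V[:n, k:], V[n, k:]   # blocks spanned by the discarded directions
    return -V12 @ v22 / (v22 @ v22)  # minimum-norm truncated solution

rng = np.random.default_rng(0)
x_true = np.array([1.0, -0.5])
A0 = rng.normal(size=(200, 2))
A = A0 + 0.1 * rng.normal(size=A0.shape)        # noise on the regressors...
b = A0 @ x_true + 0.1 * rng.normal(size=200)    # ...and on the response
print(tls(A, b), truncated_tls(A, b, k=2))      # k = n reproduces classical TLS
```

Choosing k below the number of unknowns discards the noisiest directions, which is the sense in which truncation regularizes the TLS solution.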
Abstract:
Tests of the new Rossby wave theories that have been developed over the past decade to account for discrepancies between theoretical wave speeds and those observed by satellite altimeters have focused primarily on the surface signature of such waves. It appears, however, that the surface signature of the waves acts only as a rather weak constraint, and that information on the vertical structure of the waves is required to better discriminate between competing theories. Due to the lack of 3-D observations, this paper uses high-resolution model data to construct realistic vertical structures of Rossby waves and compares these to structures predicted by theory. The meridional velocity of a section at 24° S in the Atlantic Ocean is pre-processed using the Radon transform to select the dominant westward signal. Normalized profiles are then constructed using three complementary methods based respectively on: (1) averaging vertical profiles of velocity, (2) diagnosing the amplitude of the Radon transform of the westward propagating signal at different depths, and (3) EOF analysis. These profiles are compared to profiles calculated using four different Rossby wave theories: standard linear theory (SLT), SLT plus mean flow, SLT plus topographic effects, and theory including mean flow and topographic effects. Our results support the classical theoretical assumption that westward propagating signals have a well-defined vertical modal structure associated with a phase speed independent of depth, in contrast with the conclusions of a recent study using the same model but for different locations in the North Atlantic. The model structures are in general surface intensified, with a sign reversal at depth in some regions, notably occurring at shallower depths in the East Atlantic. SLT provides a good fit to the model structures in the top 300 m, but grossly overestimates the sign reversal at depth. The addition of mean flow slightly improves the latter issue, but is too surface intensified. SLT plus topography rectifies the overestimation of the sign reversal, but overestimates the amplitude of the structure for much of the layer above the sign reversal. Combining the effects of mean flow and topography provided the best fit for the mean model profiles, although small errors at the surface and mid-depths are carried over from the individual effects of mean flow and topography respectively. Across the section the best fitting theory varies between SLT plus topography and topography with mean flow, with, in general, SLT plus topography performing better in the east where the sign reversal is less pronounced. None of the theories could accurately reproduce the deeper sign reversals in the west. All theories performed badly at the boundaries. The generalization of this method to other latitudes, oceans, models and baroclinic modes would provide greater insight into the variability in the ocean, while better observational data would allow verification of the model findings.
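A minimal sketch of the Radon-transform pre-processing step on a synthetic longitude-time (Hovmöller) section: a propagating signal lines up along one projection angle, where the variance of the projections peaks, and the phase speed follows from that angle and the grid spacing. Everything below (field, grid, the skimage routine) is illustrative, not the study's actual 24° S model data.

```python
import numpy as np
from skimage.transform import radon

# Synthetic Hovmoller diagram: a single westward-propagating wave
nx, nt = 128, 128
x = np.linspace(0.0, 2.0 * np.pi, nx)      # "longitude"
t = np.linspace(0.0, 2.0 * np.pi, nt)      # "time"
X, T = np.meshgrid(x, t)
field = np.sin(3.0 * X + 2.0 * T)          # phase kx + wt: westward propagation

angles = np.arange(0.0, 180.0)             # projection angles in degrees
sinogram = radon(field, theta=angles, circle=False)

# Projections parallel to the wave crests add coherently, so their
# variance across offsets is maximal at the propagation angle.
dominant = angles[np.argmax(sinogram.var(axis=0))]
print("dominant propagation angle:", dominant, "degrees")
```

In the study, this dominant-angle diagnosis is repeated at each depth, and the amplitude of the Radon transform of the westward signal then yields the vertical profile (method 2 above).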
Abstract:
We present a description of the Stern-Gerlach type experiments using only the concepts of classical electrodynamics and Newton's equations of motion. The quantization of the projections of the spin (or the projections of the magnetic dipole) is not introduced in our calculations. The main characteristic of our approach is a quantitative analysis of the motion of the magnetic atoms at the entrance of the magnetic field region. This study reveals a mechanism which continuously modifies the orientation of the magnetic dipole of the atom in a very short time interval at the entrance of the magnetic field region. The mechanism is based on the conservation of the total energy associated with a magnetic dipole which moves in a non-uniform magnetic field generated by an electromagnet. A detailed quantitative comparison with the (1922) Stern-Gerlach experiment and the didactical (1967) experiment by J.R. Zacharias is presented. We conclude, contrary to the original Stern-Gerlach statement, that the classical explanations are not ruled out by the experimental data.
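The classical ingredients of such an analysis can be written compactly (standard classical electrodynamics; our notation, not the paper's):

\[
m\,\ddot{\mathbf r} = \nabla\left(\boldsymbol{\mu}\cdot\mathbf B\right),
\qquad
\dot{\boldsymbol{\mu}} = \gamma\,\boldsymbol{\mu}\times\mathbf B,
\qquad
E = \tfrac{1}{2}\, m\,\dot{\mathbf r}^{\,2} - \boldsymbol{\mu}\cdot\mathbf B = \text{const},
\]

i.e. translational motion driven by the field gradient, Larmor precession of the dipole, and the total-energy conservation on which the proposed reorientation mechanism at the fringe field rests.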
Abstract:
The Mario Schenberg gravitational wave detector has started its commissioning phase at the Physics Institute of the University of Sao Paulo. We have collected almost 200 h of data from the instrument in order to check its behavior and performance. We have also been developing a data acquisition system for it based on a VXI system, composed of an analog-to-digital converter and a GPS receiver for time synchronization, and we have been building the software that controls and configures the data acquisition. Here we present an overview of the Mario Schenberg detector and its data acquisition system, some results from the first commissioning run, and solutions for some of the problems we have identified.
Abstract:
We study the ground-state energy of a classical artificial molecule formed by two two-dimensional clusters (artificial atoms) of N/2 charged particles separated by a distance d. For the small molecules of N = 2 and 4, we obtain analytical expressions for this energy. For the larger ones, we calculate the ground-state energy using molecular dynamics simulation for N up to 128. From our numerical results, we obtain a function that approximates the ground-state energy of the molecules, covering the range from atoms to molecules, for any inter-atom distance d and for particle numbers from N = 8 to 128, to within one percent of the MD data.
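A sketch of the kind of classical energy involved, assuming the usual dimensionless model of two parabolically confined 2D Coulomb clusters whose planes lie a distance d apart; the model details and the use of a local optimizer instead of molecular dynamics are our simplifications, not the paper's setup.

```python
import numpy as np
from scipy.optimize import minimize

def energy(flat, n_per_layer, d):
    """Parabolic in-plane confinement plus Coulomb repulsion; particles in
    opposite layers acquire an extra out-of-plane separation d."""
    pos = flat.reshape(-1, 2)                  # in-plane coordinates
    layer = np.repeat([0, 1], n_per_layer)
    e = np.sum(pos ** 2)                       # confinement term (dimensionless)
    for i in range(len(pos)):
        for j in range(i + 1, len(pos)):
            dz = d if layer[i] != layer[j] else 0.0
            e += 1.0 / np.sqrt(np.sum((pos[i] - pos[j]) ** 2) + dz ** 2)
    return e

rng = np.random.default_rng(0)
n_per_layer, d = 4, 1.0                        # N = 8 particles in total
res = minimize(energy, rng.normal(size=4 * n_per_layer), args=(n_per_layer, d),
               method="Powell")
print(res.fun)                                 # approximate ground-state energy
```

Restarting from many random configurations and keeping the lowest value crudely mimics the annealing role that the MD runs play in the paper.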
Abstract:
Cyclic imides have been widely employed in drug design research due to their multiple pharmacological and biological properties. In the present study, two-dimensional quantitative structure-activity relationship (2D QSAR) studies were conducted on a series of potent analgesic cyclic imides using both classical and hologram QSAR (HQSAR) methods, yielding significant statistical models (classical QSAR, q(2) = 0.80; HQSAR, q(2) = 0.84). The models were then used to predict an external test set, and the predicted values were in good agreement with the experimental results, indicating their consistency for untested compounds.
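For readers unfamiliar with the statistic, q(2) is the leave-one-out cross-validated analogue of r(2):

\[
q^2 = 1 - \frac{\sum_i \left( y_i - \hat{y}_{(i)} \right)^2}{\sum_i \left( y_i - \bar{y} \right)^2},
\]

where the prediction for compound i comes from a model fitted with that compound left out; values of 0.80 and 0.84 are well above the q^2 > 0.5 threshold conventionally taken to indicate internal predictivity in QSAR work.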
Abstract:
In the context of either Bayesian or classical sensitivity analyses of over-parametrized models for incomplete categorical data, it is well known that prior-dependence of posterior inferences for nonidentifiable parameters, or the choice of overly parsimonious over-parametrized models, may lead to erroneous conclusions. Nevertheless, some authors either pay no attention to which parameters are nonidentifiable or do not appropriately account for possible prior-dependence. We review the literature on this topic and consider simple examples to emphasize that in both inferential frameworks the subjective components can influence results in nontrivial ways, irrespective of the sample size. Specifically, we show that prior distributions commonly regarded as slightly informative or noninformative may actually be too informative for nonidentifiable parameters, and that the choice of over-parametrized models may drastically impact the results, suggesting that a careful examination of their effects should be considered before drawing conclusions.
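The mechanism behind this prior-dependence can be stated in one line. Write the parameter as theta = (phi, lambda), where the likelihood depends on the data y only through the identifiable part phi (a schematic version of the situations reviewed here). Then

\[
p(\lambda, \varphi \mid y) \;=\; p(\lambda \mid \varphi)\; p(\varphi \mid y)
\quad\Longrightarrow\quad
p(\lambda \mid \varphi, y) \;=\; p(\lambda \mid \varphi),
\]

so the conditional posterior of the nonidentifiable part equals its conditional prior for every sample size: whatever 'noninformative' choice is made for p(lambda | phi) is carried untouched into the posterior.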
Abstract:
This dissertation surveys the literature on economic growth. I review a substantial number of articles published by some of the most renowned researchers engaged in the study of economic growth. The literature is so vast that before undertaking new studies it is very important to know what has already been done in the field. The dissertation has six chapters. In Chapter 1, I introduce the reader to the topic of economic growth. In Chapter 2, I present the Solow model and other contributions to exogenous growth theory proposed in the literature; I also briefly discuss the endogenous approach to growth. In Chapter 3, I summarize the variety of econometric problems that affect cross-country regressions. The factors that contribute to economic growth are highlighted and the validity of the empirical results is discussed. In Chapter 4, the existence of convergence, whether conditional or not, is analyzed; the literature using both cross-sectional and panel data is reviewed, and an analysis of convergence within a quantile-regression framework is also provided. In Chapter 5, the controversial relationship between financial development and economic growth is analyzed. In particular, I discuss the arguments for and against the Schumpeterian view that considers financial development an important determinant of innovation and economic growth. Chapter 6 concludes the dissertation. Summing up, the literature appears not to be fully conclusive about the main determinants of economic growth, the existence of convergence, and the impact of finance on growth.
Abstract:
The increasing use of fossil fuels, together with the demographic explosion of cities, has a huge environmental impact on society. To mitigate these impacts, regulatory requirements have positively influenced the environmental consciousness of society, as well as the strategic behavior of businesses. Along with this environmental awareness, regulatory bodies have formulated new laws to control potentially polluting activities, notably in the gas station sector. Seeking greater market competitiveness, this sector needs to respond quickly to internal and external pressures, adapting strategically to the newly required standards in order to obtain the 'Green Badge'. Gas stations have incorporated new strategies to attract and retain new customers, who present increasingly strong social demands. In the social dimension, these projects help the local economy by generating jobs and distributing income. The present research aims to align the social, economic and environmental dimensions in order to define sustainable performance indicators for the gas station sector in the city of Natal/RN. A Sustainable Balanced Scorecard (SBSC) framework was created with a set of indicators for mapping the production process of gas stations; this mapping aimed at identifying operational inefficiencies through multidimensional indicators. To carry out this research, a system for evaluating sustainability performance was developed, applying Data Envelopment Analysis (DEA) in a quantitative approach to detect the efficiency level of the system. To capture the systemic complexity, sub-organizational processes were analyzed with Network Data Envelopment Analysis (NDEA), modeling their micro-activities to identify and diagnose the real causes of overall inefficiency. The sample comprised 33 gas stations, and the conceptual model included 15 indicators distributed across the three dimensions of sustainability: social, environmental and economic. These three dimensions were measured using classical input-oriented DEA-CCR models. To unify the performance scores of the individual dimensions, a single grouping index was designed based on two means: arithmetic and weighted. A further analysis then measured the four SBSC perspectives (learning and growth, internal processes, customers, and financial), unified by averaging the performance scores. The NDEA results showed that no company achieved excellence in sustainability performance, and some gas stations with high NDEA efficiency proved inefficient under certain SBSC perspectives. A comparative analysis of sustainable performance among the gas stations was then carried out, enabling entrepreneurs to evaluate their performance against market competitors. Diagnoses were also obtained to support entrepreneurs' decision-making in improving the management of organizational resources, and to provide guidelines for regulators. Finally, the average sustainable performance index was 69.42%, reflecting the sector's efforts toward environmental compliance. This result points to significant awareness in this segment, but further action is still needed to enhance sustainability in the long term.
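For reference, the input-oriented CCR score used for each dimension is the optimum of the standard envelopment linear program (standard DEA notation; the 33 stations of the sample are indexed by j):

\[
\min_{\theta,\,\lambda}\; \theta
\quad\text{s.t.}\quad
\sum_{j=1}^{33} \lambda_j x_j \le \theta\, x_o,
\qquad
\sum_{j=1}^{33} \lambda_j y_j \ge y_o,
\qquad
\lambda_j \ge 0,
\]

where x_o and y_o are the input and output vectors of the station under evaluation and theta = 1 means it lies on the efficient frontier; NDEA applies the same logic to each sub-process, linked through intermediate products.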
Abstract:
This study compared the color fidelity of different composite resins with their registration in the Vita Classical Shade Guide. Using a prefabricated Teflon mold, 120 specimens were prepared and divided into four groups (n = 30) according to the resin tested. Three subgroups (n = 10) were prepared for each resin group; these subgroups tested enamel shade, dentin shade, and combined enamel and dentin shades. Three measurements were performed to verify whether the specimen shade matched that of the Vita Classical Shade Guide. The color was evaluated and the shade variations were calculated. The data were submitted to a three-way ANOVA (time, color match, and composite type), followed by Tukey's test. It was concluded that all composite resins showed color differences in relation to the Vita Classical Shade Guide.
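The abstract does not state the color-difference metric; in dental color research, shade variations are conventionally quantified in CIELAB coordinates as

\[
\Delta E^{*}_{ab} = \sqrt{(\Delta L^{*})^{2} + (\Delta a^{*})^{2} + (\Delta b^{*})^{2}},
\]

with the differences taken between the resin specimen and the matching Vita Classical tab; values above roughly 3.3 are often taken as clinically unacceptable in this literature.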