998 results for Correlation (Statistics)


Abstract:

Correlation energies for all isoelectronic sequences of 2 to 20 electrons and Z = 2 to 25 are obtained by taking the differences between theoretical total energies from Dirac-Fock calculations and experimental total energies. These are purely relativistic correlation energies, since relativistic and QED effects are already accounted for. Both the theoretical and the experimental values are analysed critically in order to obtain values that are as accurate as possible. The resulting correlation energies behave essentially consistently from Z = 2 to 17. For Z > 17, inconsistencies occur, indicating errors in the experimental values that become very large for Z > 25.
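A minimal sketch of the arithmetic described above, assuming hypothetical energy values in hartrees; the function name and sample numbers are illustrative, not taken from the paper:

```python
def correlation_energy(e_experimental, e_dirac_fock):
    """Relativistic correlation energy as the difference between the
    experimental total energy and the Dirac-Fock total energy (both in
    hartrees). Relativistic and QED effects are already contained in the
    Dirac-Fock value, so the difference is purely correlation."""
    return e_experimental - e_dirac_fock

# Hypothetical example for a two-electron ion (values are placeholders):
e_exp = -2.9034   # experimental total energy
e_df  = -2.8618   # Dirac-Fock total energy
print(correlation_energy(e_exp, e_df))  # -> -0.0416
```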

Abstract:

Quasi-molecular X-rays observed in heavy-ion collisions are interpreted within a relativistic calculation of correlation diagrams using the Dirac-Slater model. A semiquantitative description of noncharacteristic M X-rays is given for the Au-I system.

Abstract:

The European market for organic food has grown strongly since the 1990s. This growth was favoured by the introduction of EU Regulation 2092/91 on the certification of organic products and by the payment of subsidies to farmers willing to convert. By the end of the 1990s these measures had led to an oversupply of some organic products at the European level. Consumer demand did not rise at the same rate as supply, and the need to improve the market balance became obvious. The European Commission formulated this need in 2004 in the first "European Action Plan for Organic Food and Farming". As a prerequisite for more even market growth, the action plan calls for a more transparent market, to be achieved by collecting statistical data on the production and consumption of organic products. So far, however, the implementation of the action plan has been unsatisfactory, as there is still no uniform data collection for the organic sector at the EU level.

The aim of this study is to find appropriate methods for collecting, processing, and analysing organic market data. Suitable data sources are identified, and it is examined how the collected data can be checked for plausibility. To this end, an extensive data set on the organic market is analysed that was collected within the EU research project "Organic Marketing Initiatives and Rural Development" (OMIaRD) and covers all EU-15 countries as well as the Czech Republic, Slovenia, Norway, and Switzerland. Data for the following organic product groups are examined: cereals, potatoes, vegetables, fruit, milk, beef, sheep and goat meat, pork, poultry meat, and eggs. A central approach of this study is the compilation of organic supply balance sheets, which provide a summary overview of supply and demand for each product group. The following key variables are examined: organic production, organic sales, organic consumption, organic foreign trade, organic producer prices, and organic consumer prices. In addition, the organic market data are set in relation to the corresponding figures for the total market (organic plus conventional) in order to assess the importance of the organic sector at the product and country level.

Both primary and secondary research are used for data collection. As secondary sources, publications by market research institutes, organic producer associations, and scientific institutes are evaluated. Empirical data on the organic market are gathered in extensive interviews with market experts in all participating countries. The data are examined with correlation and regression analyses, and hypotheses about presumed relationships between key variables of the organic market are tested.

The data basis of this study refers to a single year and thus represents a snapshot of the EU organic market situation. To enable market actors to anticipate future market trends, the establishment of an EU-wide system for collecting organic market data is called for. This requires harmonised data collection in all EU countries according to uniform standards. The compilation of market data for the organic sector should be compatible with the methods and variables of the existing Eurostat database for the total agricultural market (organic plus conventional). An annually updated organic market database would increase the transparency of the organic market and facilitate the future development of the organic sector.
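As an illustration of the kind of analysis described above, a minimal sketch of a correlation and regression test between two key market variables; the country figures are invented placeholders, not OMIaRD data:

```python
import numpy as np
from scipy import stats

# Hypothetical per-country figures (thousand tonnes): organic production
# and organic consumption for one product group in one year.
production  = np.array([120.0, 45.0, 210.0, 80.0, 15.0, 160.0])
consumption = np.array([110.0, 50.0, 190.0, 95.0, 20.0, 150.0])

# Correlation between the two key variables.
r, p = stats.pearsonr(production, consumption)
print(f"Pearson r = {r:.3f}, p = {p:.4f}")

# Simple linear regression: consumption explained by production.
slope, intercept, r_value, p_value, stderr = stats.linregress(production, consumption)
print(f"consumption ~ {slope:.2f} * production + {intercept:.2f}")
```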

Abstract:

The interaction of short, intense laser pulses with atoms and molecules produces a multitude of highly nonlinear processes that require a non-perturbative treatment. Studying these processes in detail by numerically solving the time-dependent Schrödinger equation becomes a daunting task when the number of degrees of freedom is large, and the coupling between electronic and nuclear degrees of freedom further aggravates the computational problem. In the present work we show that the time-dependent Hartree (TDH) approximation, which neglects correlation effects, gives an unreliable description of the system dynamics both in the absence and in the presence of an external field. A theoretical framework is therefore required that treats the electrons and nuclei on an equal footing and fully quantum mechanically. To address this issue we discuss two approaches, the multicomponent density functional theory (MCDFT) and the multiconfiguration time-dependent Hartree (MCTDH) method, that go beyond the TDH approximation and describe the correlated electron-nuclear dynamics accurately. In the MCDFT framework, where the time-dependent electronic and nuclear densities are the basic variables, we discuss an algorithm to calculate the exact Kohn-Sham (KS) potentials for small model systems. By simulating the photodissociation process in a model hydrogen molecular ion, we show that the exact KS potentials contain all the many-body effects and give insight into the system dynamics. In the MCTDH approach, the wave function is expanded as a sum of products of single-particle functions (SPFs). The MCTDH method is able to describe electron-nuclear correlation effects because both the SPFs and the expansion coefficients evolve in time, giving an accurate description of the system dynamics. We show that the MCTDH method is suitable for studying a variety of processes, such as the fragmentation of molecules, high-order harmonic generation, the two-center interference effect, and the lochfrass effect. We discuss these phenomena in a model hydrogen molecular ion and a model hydrogen molecule. The inclusion of absorbing boundaries in the mean-field approximation and its consequences are discussed using the model hydrogen molecular ion. To this end, two types of calculations are considered: (i) a variational approach with a complex absorbing potential included in the full many-particle Hamiltonian, and (ii) an approach in the spirit of time-dependent density functional theory (TDDFT), with complex absorbing potentials included in the single-particle equations. It is shown that for small grids the TDDFT approach is superior to the variational approach.
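For reference, the MCTDH expansion mentioned above has the standard textbook form (a generic statement of the ansatz, not reproduced from this thesis), for f degrees of freedom, where both the coefficients and the single-particle functions are time-dependent:

```latex
\Psi(q_1,\dots,q_f,t)
  = \sum_{j_1=1}^{n_1}\cdots\sum_{j_f=1}^{n_f}
    A_{j_1 \dots j_f}(t)\,
    \prod_{\kappa=1}^{f} \varphi_{j_\kappa}^{(\kappa)}(q_\kappa,t)
```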

Abstract:

This study describes a combined empirical/modeling approach to assess the possible impact of climate variability on rice production in the Philippines. We collated climate data for the last two decades (1985-2002) as well as yield statistics for six provinces of the Philippines, selected along a North-South gradient. Data from the climate information system of NASA were used as input parameters of the model ORYZA2000 to determine potential yields and, in a next step, the yield gaps, defined as the difference between potential and actual yields. Both simulated and actual yields of irrigated rice varied strongly between years. However, no climate-driven trends were apparent, and the variability in actual yields showed no correlation with climatic parameters. The observed variation in simulated yields was attributable to seasonal variations in climate (dry/wet season) and to climatic differences between provinces and agro-ecological zones. The actual yield variation between provinces was related not to differences in the climatic yield potential but rather to soil and management factors. The resulting yield gap was largest in remote and infrastructurally disfavored provinces (low external input use) with a high production potential (high solar radiation and large day-night temperature differences). In turn, the yield gap was lowest in central provinces with good market access but a relatively low climatic yield potential. We conclude that neither long-term trends nor the variability of the climate can explain current rice yield trends, and that agroecological, seasonal, and management effects are overriding any possible climatic variations. On the other hand, the lack of a climate-driven trend in the present situation may be superseded by ongoing climate change in the future.
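A minimal sketch of the yield-gap computation described above, with invented numbers; the province names and yields (t/ha) are placeholders, not values from the study:

```python
# Yield gap = potential (simulated) yield minus actual (reported) yield.
# All values in t/ha; the figures below are invented placeholders.
potential_yield = {"ProvinceA": 9.8, "ProvinceB": 8.1, "ProvinceC": 9.2}
actual_yield    = {"ProvinceA": 3.9, "ProvinceB": 5.6, "ProvinceC": 4.4}

for province in potential_yield:
    gap = potential_yield[province] - actual_yield[province]
    print(f"{province}: yield gap = {gap:.1f} t/ha")
```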

Abstract:

Web services from different partners can be combined into applications that realize a more complex business goal. Such applications, built as Web service compositions, define how interactions between Web services take place in order to implement the business logic. Web service compositions not only have to provide the desired functionality but also have to comply with certain Quality of Service (QoS) levels. Maximizing user satisfaction, also reflected as Quality of Experience (QoE), is a primary goal to be achieved in a Service-Oriented Architecture (SOA). Unfortunately, in a dynamic environment like SOA, unforeseen situations may arise, such as services becoming unavailable or not responding within the desired time frame. In such situations, appropriate actions need to be triggered in order to avoid the violation of QoS and QoE constraints. In this thesis, solutions are developed to manage Web services and Web service compositions with regard to QoS and QoE requirements. The Business Process Rules Language (BPRules) was developed to manage Web service compositions when undesired QoS or QoE values are detected. BPRules provides a rich set of management actions that may be triggered to control the service composition and improve its quality behavior. Regarding the quality properties, BPRules makes it possible to distinguish between the QoS values as promised by the service providers, the QoE values assigned by end users, the monitored QoS as measured by our BPR framework, and the predicted QoS and QoE values. BPRules facilitates the specification of user groups characterized by different context properties and allows a personalized, context-aware service selection to be triggered for the specified user groups. In a service market where a multitude of services with the same functionality but different quality values are available, the right services need to be selected for realizing the service composition. We developed new, efficient heuristic algorithms for choosing high-quality services for the composition. BPRules offers the possibility to integrate multiple service selection algorithms, and the selection algorithms are also applicable to non-linear objective functions and constraints. The BPR framework includes new approaches for context-aware service selection and quality property prediction. We consider the location of users and services as a context dimension for the prediction of response time and throughput. The BPR framework combines all new features and contributions into a comprehensive management solution. Furthermore, it facilitates flexible monitoring of QoS properties without having to modify the description of the service composition. We show how the different modules of the BPR framework work together in order to execute the management rules. We evaluate how our selection algorithms outperform a genetic algorithm from related research, and the evaluation reveals how context data can be used for a personalized prediction of response time and throughput.
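BPRules syntax is not reproduced here; the following is a hypothetical Python sketch of the general pattern such a rule expresses (a monitored QoS value violating a promised level triggers a management action such as service replacement). All names and thresholds are invented:

```python
from dataclasses import dataclass

@dataclass
class MonitoredService:
    name: str
    response_time_ms: float   # monitored QoS value
    promised_ms: float        # QoS level as promised by the provider

def check_and_react(service: MonitoredService) -> str:
    """If the monitored response time violates the promised level,
    trigger a management action; otherwise leave the composition alone."""
    if service.response_time_ms > service.promised_ms:
        # In a BPRules-like framework this could be an action such as
        # 'replace service' or 'rerun selection'; here we just report it.
        return (f"replace {service.name}: {service.response_time_ms:.0f} ms "
                f"> promised {service.promised_ms:.0f} ms")
    return f"{service.name}: within promised QoS"

print(check_and_react(MonitoredService("PaymentService", 820.0, 500.0)))
print(check_and_react(MonitoredService("ShippingService", 210.0, 500.0)))
```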

Abstract:

Humans distinguish materials such as metal, plastic, and paper effortlessly at a glance. Traditional computer vision systems cannot solve this problem at all. Recognizing surface reflectance properties from a single photograph is difficult because the observed image depends heavily on the amount of light incident from every direction. A mirrored sphere, for example, produces a different image in every environment. To make matters worse, two surfaces with different reflectance properties could produce identical images: the mirrored sphere simply reflects its surroundings, so in the right artificial setting it could mimic the appearance of a matte ping-pong ball. Yet humans possess an intuitive sense of what materials typically "look like" in the real world. This thesis develops computational algorithms with a similar ability to recognize reflectance properties from photographs under unknown, real-world illumination conditions. Real-world illumination is complex, with light typically incident on a surface from every direction. We find, however, that real-world illumination patterns are not arbitrary. They exhibit highly predictable spatial structure, which we describe largely in the wavelet domain. Although they differ in several respects from typical photographs, illumination patterns share much of the regularity described in the natural image statistics literature. These properties of real-world illumination lead to predictable image statistics for a surface with given reflectance properties. We construct a system that classifies a surface according to its reflectance from a single photograph under unknown illumination. Our algorithm learns relationships between surface reflectance and certain statistics computed from the observed image. Like the human visual system, we solve the otherwise underconstrained inverse problem of reflectance estimation by taking advantage of the statistical regularity of illumination. For surfaces with homogeneous reflectance properties and known geometry, our system rivals human performance.
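A minimal sketch, under assumptions, of the kind of wavelet-domain image statistics such a system might feed to a classifier; the choice of wavelet, decomposition depth, and statistics is illustrative, not the thesis's actual feature set:

```python
import numpy as np
import pywt                      # PyWavelets
from scipy.stats import kurtosis

def wavelet_statistics(image: np.ndarray, wavelet: str = "db2", level: int = 2):
    """Summary statistics of wavelet subband coefficients for a grayscale
    image. Heavy-tailed (high-kurtosis) subband distributions are one of
    the regularities reported for natural images and illumination maps."""
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    features = []
    for detail_level in coeffs[1:]:              # skip the approximation band
        for band in detail_level:                # horizontal, vertical, diagonal
            c = band.ravel()
            features += [c.std(), kurtosis(c)]
    return np.array(features)

# Usage on a random test image (a real system would use photographs):
img = np.random.rand(128, 128)
print(wavelet_statistics(img).shape)
```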

Abstract:

There is general consensus that context can be a rich source of information about an object's identity, location, and scale. In fact, the structure of many real-world scenes is governed by strong configurational rules akin to those that apply to a single object. Here we introduce a simple probabilistic framework for modeling the relationship between context and object properties, based on the correlation between the statistics of low-level features across the entire scene and the objects that it contains. The resulting scheme serves as an effective procedure for object priming, context-driven focus of attention, and automatic scale selection in real-world scenes.

Abstract:

This paper presents a new paradigm for signal reconstruction and superresolution, Correlation Kernel Analysis (CKA), based on the selection of a sparse set of bases from a large dictionary of class-specific basis functions. The basis functions we use are the correlation functions of the class of signals we are analyzing. To choose the appropriate features from this large dictionary, we use Support Vector Machine (SVM) regression and compare it to traditional Principal Component Analysis (PCA) for the tasks of signal reconstruction, superresolution, and compression. The testbed we use in this paper is a set of images of pedestrians. The paper also presents results of experiments in which we use a dictionary of multiscale basis functions and then apply Basis Pursuit De-Noising to obtain a sparse, multiscale approximation of a signal. The results are analyzed and we conclude that: 1) when used with a sparse representation technique, the correlation function is an effective kernel for image reconstruction and superresolution; 2) for image compression, PCA and SVM have different tradeoffs, depending on the particular metric used to evaluate the results; 3) in sparse representation techniques, the L_1 norm is not a good proxy for the true measure of sparsity, L_0; and 4) the L_epsilon norm may be a better error metric for image reconstruction and compression than the L_2 norm, though the exact psychophysical metric should take into account higher-order structure in images.
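To make point 3 concrete, a small sketch, under assumptions, of sparse approximation with an L_1 penalty (the convex relaxation that Basis Pursuit De-Noising solves); the dictionary here is random rather than a class-specific correlation dictionary:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

# Random dictionary (columns are basis functions) and a signal that is an
# exact combination of 3 atoms, i.e. its true L_0 "norm" is 3.
D = rng.standard_normal((64, 256))
true_coef = np.zeros(256)
true_coef[[10, 100, 200]] = [1.5, -2.0, 0.7]
signal = D @ true_coef

# Basis-pursuit-denoising-style sparse approximation via an L_1 penalty.
lasso = Lasso(alpha=0.01, max_iter=10000)
lasso.fit(D, signal)

n_nonzero = np.sum(lasso.coef_ != 0)
print(f"atoms selected (L_0 of solution): {n_nonzero}")
# The L_1 solution is sparse, but typically selects more atoms than the
# true L_0-sparsest representation -- the sense in which L_1 is only a
# proxy for L_0.
```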

Abstract:

The Aitchison vector space structure for the simplex is generalized to a Hilbert space structure, A2(P), for distributions and likelihoods on arbitrary spaces. Central notions of statistics, such as information or likelihood, can be identified in the algebraic structure of A2(P), together with their corresponding notions in compositional data analysis, such as the Aitchison distance or the centered log-ratio transform. In this way, quite elaborate aspects of mathematical statistics can be understood easily in the light of a simple vector space structure and of compositional data analysis. For example, combinations of statistical information, such as Bayesian updating, and combinations of likelihood and robust M-estimation functions are simple additions/perturbations in A2(P_prior); weighting observations corresponds to a weighted addition of the corresponding evidence. Likelihood-based statistics for general exponential families turns out to have a particularly easy interpretation in terms of A2(P): regular exponential families form finite-dimensional linear subspaces of A2(P), and they correspond to the finite-dimensional subspaces formed by their posteriors in the dual information space A2(P_prior). The Aitchison norm can be identified with mean Fisher information. The closing constant itself is identified with a generalization of the cumulant function and is shown to be the Kullback-Leibler directed information. Fisher information is the local geometry of the manifold induced by the A2(P) derivative of the Kullback-Leibler information, and the space A2(P) can therefore be seen as the tangential geometry of statistical inference at the distribution P. The discussion of A2(P)-valued random variables, such as estimation functions or likelihoods, gives a further interpretation of Fisher information as the expected squared norm of evidence, and a scale-free understanding of unbiased reasoning.
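For orientation, the compositional-data notions referred to above have standard textbook definitions (assuming a D-part composition x; not reproduced from this paper): the centered log-ratio transform and the Aitchison distance it induces:

```latex
\operatorname{clr}(\mathbf{x})
  = \left(\ln\frac{x_1}{g(\mathbf{x})},\,\dots,\,\ln\frac{x_D}{g(\mathbf{x})}\right),
\qquad
g(\mathbf{x}) = \Bigl(\prod_{i=1}^{D} x_i\Bigr)^{1/D},
\qquad
d_A(\mathbf{x},\mathbf{y})
  = \bigl\lVert \operatorname{clr}(\mathbf{x}) - \operatorname{clr}(\mathbf{y}) \bigr\rVert_2 .
```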

Abstract:

We study four measures of problem instance behavior that might account for the observed differences in interior-point method (IPM) iteration counts when these methods are used to solve semidefinite programming (SDP) problem instances: (i) an aggregate geometry measure related to the primal and dual feasible regions (aspect ratios) and to the norms of the optimal solutions, (ii) the (Renegar) condition measure C(d) of the data instance, (iii) a measure of the near-absence of strict complementarity of the optimal solution, and (iv) the level of degeneracy of the optimal solution. We compute these measures for the SDPLIB suite of problem instances and measure the correlation between them and IPM iteration counts (obtained with the software SDPT3) when the measures have finite values. Our conclusions are roughly as follows: the aggregate geometry measure is highly correlated with IPM iterations (CORR = 0.896) and is a very good predictor of IPM iterations, particularly for problem instances with solutions of small norm and aspect ratio. The condition measure C(d) is also correlated with IPM iterations, but less so than the aggregate geometry measure (CORR = 0.630). The near-absence of strict complementarity is weakly correlated with IPM iterations (CORR = 0.423). The level of degeneracy of the optimal solution is essentially uncorrelated with IPM iterations.
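The CORR statistic reported above is an ordinary sample correlation coefficient; a minimal sketch with invented data, not the SDPLIB measurements:

```python
import numpy as np

# Invented values of a behavioral measure and matching IPM iteration
# counts for a handful of hypothetical SDP instances.
geometry_measure = np.array([1.2, 3.4, 0.8, 5.1, 2.2, 4.0])
ipm_iterations   = np.array([14,  25,  12,  33,  18,  27])

corr = np.corrcoef(geometry_measure, ipm_iterations)[0, 1]
print(f"CORR = {corr:.3f}")
```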

Abstract:

The aim of the study was to validate the Spanish-language version of the MELASQOL quality-of-life scale in a group of Colombian women with melasma. The study sample consisted of 80 patients with melasma attending the aesthetic medicine clinic of the IPS Quinta de Mutis of the Universidad del Rosario. They were asked to participate and to provide a telephone contact number so that the scale translated into Spanish (MELASQOL-t) could be administered in a second, telephone contact, following the guidelines for the cross-cultural adaptation of health-related quality-of-life measures (QoLMs) proposed by the WHO for scale validation. Once all patients had answered the scale by telephone, the data were processed for validation by statistical analysis, using the criteria of internal consistency, criterion validity, and reproducibility. A third telephone contact was made, for statistical purposes, with only a small part of the sample (n = 10). The mean age of the study group was 40 ± 12 years, the most frequent socioeconomic stratum was three (3), and the most common educational level was completed university studies. The validation criteria yielded the following results: internal consistency with Cronbach's alpha = 0.88, criterion validity with r_s = 0.70 (p < 0.001), and reproducibility with an intraclass correlation coefficient of 0.959 (95% CI: 0.986, p < 0.001). The main conclusion was that the MELASQOL scale translated into Spanish was validated under adequate criteria, obtaining good indices for measuring quality of life in the group of Colombian women with melasma studied.
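The internal-consistency statistic reported above has a standard formula; a minimal sketch with invented item scores (not the study's data), assuming respondents are rows and items are columns:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances.sum() / total_variance)

# Invented responses: 6 respondents answering a 4-item scale.
scores = np.array([
    [3, 4, 3, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
    [1, 2, 1, 1],
    [3, 3, 4, 3],
    [5, 4, 5, 5],
])
print(f"alpha = {cronbach_alpha(scores):.2f}")
```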

Abstract:

Introduction: non-alcoholic fatty liver disease (NAFLD) is a very frequent condition with an insidious course. The diagnosis, follow-up, and treatment of this condition still lack consensus, mainly because of incomplete knowledge of its natural history and the difficulty of a precise non-invasive diagnosis. Materials and methods: prospective, observational, cross-sectional correlation study using non-random sampling of the patients attending the medical check-up service of the Fundación CardioInfantil - Instituto de Cardiología. Clinical and laboratory variables were evaluated, including body mass index, transaminases, triglycerides, and the ultrasonographic appearance of the liver. Non-parametric analysis of variance was performed with the Kruskal-Wallis test, and correlation analysis with Spearman's correlation coefficient. Results: 619 patients were included. A statistically significant variation (p < 0.001) was found among all the analysed variables when grouped according to the ultrasonographic appearance of the liver. Positive and statistically significant correlation coefficients (p < 0.001) were also found for the same variables. Discussion: ultrasonographic evaluation of the liver is an attractive option for the diagnosis and follow-up of patients with NAFLD because it is non-invasive, low-cost, and widely available. The results obtained suggest that, given the variation of the clinical parameters with hepatic appearance, this tool can be useful both in the diagnostic phase and in the follow-up phase for patients in this population. The correlation coefficients suggest that the possibility of predicting blood variables using this method should be studied further. Conclusions: taken together, the results of this study support the usefulness of ultrasonographic evaluation of the liver as an assessment and possible follow-up tool in patients with suspected NAFLD in this population.
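The two tests named above are available in scipy; a minimal sketch with invented values (not the study's data), grouping a clinical variable by a hypothetical ultrasonographic grade:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Invented ALT values (U/L) grouped by hypothetical ultrasonographic
# grades of hepatic steatosis (normal / mild / moderate).
alt_normal   = rng.normal(25, 5, 30)
alt_mild     = rng.normal(35, 7, 30)
alt_moderate = rng.normal(50, 9, 30)

# Non-parametric one-way analysis of variance across the three groups.
h, p = stats.kruskal(alt_normal, alt_mild, alt_moderate)
print(f"Kruskal-Wallis: H = {h:.2f}, p = {p:.4g}")

# Spearman correlation between grade (0/1/2) and ALT.
grades = np.repeat([0, 1, 2], 30)
alt_all = np.concatenate([alt_normal, alt_mild, alt_moderate])
rho, p = stats.spearmanr(grades, alt_all)
print(f"Spearman: rho = {rho:.2f}, p = {p:.4g}")
```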