886 results for Numerical evaluations


Relevance: 60.00%

Abstract:

This article analyses and discusses issues that pertain to the choice of relevant databases for assigning values to the components of evaluative likelihood ratio procedures at source level. Although several formal likelihood ratio developments currently exist, both case practitioners and recipients of expert information (such as the judiciary) may be reluctant to consider them as a framework for evaluating scientific evidence in context. The recent ruling R v T and the ensuing discussions in many forums provide illustrative examples of this. In particular, it is often felt that likelihood ratio-based reasoning requires extensive quantitative information, along with the means for dealing with the technicalities of the algebraic formulation of these approaches. With regard to this objection, this article proposes two distinct discussions. In the first part, it is argued that, from a methodological point of view, there are additional levels of qualitative evaluation that are worth considering before focusing on particular numerical probability assignments. Analyses are proposed to show that, under certain assumptions, relative numerical values, as opposed to absolute values, may be sufficient to characterize a likelihood ratio for practical and pragmatic purposes. The feasibility of such qualitative considerations shows that the availability of hard numerical data is not a necessary requirement for implementing a likelihood ratio approach in practice. It is further argued that, even when numerical evaluations can be made, qualitative considerations remain valuable because they further the understanding of the logical underpinnings of an assessment. In the second part, the article draws a parallel to R v T by concentrating on a practical footwear mark case received at the authors' institute. 
This case serves to exemplify the possible use of data from various sources in casework, and helps to discuss the difficulty of reconciling the depth of theoretical likelihood ratio developments with the limited degree to which these developments can actually be applied in practice.
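A minimal numerical sketch of the point about relative values (hypothetical probability assignments; the `likelihood_ratio` helper is illustrative, not taken from the article):

```python
# Minimal sketch (hypothetical numbers): a likelihood ratio at source level
# compares the probability of the evidence under the prosecution proposition
# (Hp: the mark came from the suspect's shoe) with that under the defence
# proposition (Hd: it came from an unknown shoe).

def likelihood_ratio(p_e_given_hp, p_e_given_hd):
    """LR = Pr(E | Hp) / Pr(E | Hd)."""
    return p_e_given_hp / p_e_given_hd

# Absolute probability assignments ...
lr_absolute = likelihood_ratio(0.8, 0.01)

# ... and the same assignments rescaled by a common factor: only the
# *relative* magnitude of the two probabilities matters for the LR.
scale = 0.25
lr_relative = likelihood_ratio(0.8 * scale, 0.01 * scale)
```

Because any common scale factor cancels, an examiner who can defend the ratio of the two assignments need not defend their absolute values, which is the qualitative point made above.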

Relevance: 60.00%

Abstract:

The aim of the present work is to study occupants' exposure to fine particulate concentrations in ten nightclubs (NCs) in Athens, Greece. Measurements of PM1 and PM2.5 were made in the outdoor and indoor environment of each NC. The average indoor PM1 and PM2.5 concentrations were found to be 181.77 μg m−3 and 454.08 μg m−3 respectively, while the corresponding outdoor values were 11.04 μg m−3 and 32.19 μg m−3. Ventilation and resuspension rates were estimated through consecutive numerical experiments with an indoor air quality model and were found to be remarkably lower than the minimum values recommended by national standards. The relative effects of ventilation and smoking on the occupants' exposure were examined using multiple regression techniques. It was found that, given the low ventilation rates, the effects of smoking and occupancy are of the highest importance. Numerical evaluations showed that if the ventilation rates were at the minimum values set by national standards, indoor exposures would be reduced to 70% of the present values.
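The abstract does not give the indoor air quality model; the following is a generic single-zone mass-balance sketch with hypothetical parameter values, illustrating how a higher ventilation rate lowers the steady-state indoor concentration sustained by an indoor source:

```python
# Single-zone mass-balance sketch (assumed model form, hypothetical numbers):
#   dC/dt = lam * (C_out - C) + S/V - k_dep * C
# lam:   air-exchange (ventilation) rate [1/h]
# S:     indoor source strength (e.g. smoking) [ug/h]
# V:     room volume [m^3]
# k_dep: deposition loss rate [1/h]

def steady_state_indoor(lam, c_out, source, volume, k_dep):
    """Steady-state indoor concentration of the single-zone balance [ug/m^3]."""
    return (lam * c_out + source / volume) / (lam + k_dep)

low_vent = steady_state_indoor(lam=0.5, c_out=30.0,
                               source=2.0e5, volume=500.0, k_dep=0.2)
high_vent = steady_state_indoor(lam=5.0, c_out=30.0,
                                source=2.0e5, volume=500.0, k_dep=0.2)
# Raising the ventilation rate toward a code-compliant value sharply reduces
# the indoor concentration sustained by the indoor source.
```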

Relevance: 60.00%

Abstract:

A positive measure ψ defined on [a, b], such that its moments μ_n = ∫_a^b t^n dψ(t) exist for n = 0, ±1, ±2, …, is called a strong positive measure on [a, b]. If 0 ≤ a […] numerical evaluation of the nodes and weights of such quadrature rules is also considered. © 2010 IMACS. Published by Elsevier B.V. All rights reserved.
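As a toy illustration of strong moments (assuming the simple measure dψ(t) = dt on [1, 2], which is not the measure discussed in the abstract), the μ_n can be evaluated numerically for both negative and non-negative n:

```python
# Toy strong moments (assumed example measure): dpsi(t) = dt on [1, 2], so
# mu_n = integral_1^2 t**n dt exists for every integer n, including n < 0,
# which is what makes the measure "strong".

def moment(n, a=1.0, b=2.0, steps=100000):
    """Midpoint-rule approximation of mu_n = integral_a^b t**n dt."""
    h = (b - a) / steps
    return sum((a + (i + 0.5) * h) ** n for i in range(steps)) * h

mu0 = moment(0)     # length of [1, 2]
mu1 = moment(1)     # (b**2 - a**2) / 2
mu_m1 = moment(-1)  # ln(b/a) = ln 2
```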

Relevance: 60.00%

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance: 60.00%

Abstract:

Bound-constrained minimization is a subject of active research. To assess the performance of existing solvers, numerical evaluations and comparisons are carried out. The present work highlights arbitrary decisions that may have a crucial effect on the conclusions of numerical experiments. As a result, a detailed evaluation based on performance profiles is applied to the comparison of bound-constrained minimization solvers. Extensive numerical results are presented and analyzed.
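A minimal sketch of a performance-profile computation in the spirit of Dolan and Moré (hypothetical timings; `None` marks a solver failure):

```python
# Performance-profile sketch (hypothetical timings).
# times[solver][p] = cost of solver on problem p; None = failure.

def performance_profile(times, tau):
    """Fraction of problems each solver solves within a factor tau of the best."""
    solvers = list(times)
    n_prob = len(next(iter(times.values())))
    best = [min(times[s][p] for s in solvers if times[s][p] is not None)
            for p in range(n_prob)]
    profile = {}
    for s in solvers:
        ok = sum(1 for p in range(n_prob)
                 if times[s][p] is not None and times[s][p] <= tau * best[p])
        profile[s] = ok / n_prob
    return profile

times = {"A": [1.0, 2.0, 4.0, None],
         "B": [2.0, 2.0, 1.0, 3.0]}
rho1 = performance_profile(times, 1.0)  # fraction of wins at tau = 1
rho4 = performance_profile(times, 4.0)  # fraction solved within 4x of best
```

Reading the profile at tau = 1 gives each solver's win rate; its value for large tau gives overall robustness, which is one of the "arbitrary decisions" (cut-off, failure handling) the abstract warns can sway conclusions.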

Relevance: 60.00%

Abstract:

Visual analysis of social networks is usually based on graph drawing algorithms and tools. However, social networks are a special kind of graph in the sense that the interpretation of displayed relationships is heavily dependent on context. Context, in its turn, is given by attributes associated with graph elements, such as individual nodes, edges, and groups of edges, as well as by the nature of the connections between individuals. In most systems, attributes of individuals and communities are not taken into consideration during graph layout, except to derive weights for force-based placement strategies. This paper proposes a set of novel tools for displaying and exploring social networks based on attribute and connectivity mappings. These properties are employed to lay out nodes on the plane via multidimensional projection techniques. For the attribute mapping, we show that node proximity in the layout corresponds to similarity in attributes, making it easy to locate similar groups of nodes. The projection based on connectivity yields an initial placement that forgoes force-based and graph-analysis algorithms, reaching a meaningful layout in one pass. When a force algorithm is then applied to this initial mapping, the final layout presents better properties than conventional force-based approaches. Numerical evaluations show a number of advantages of pre-mapping points via projections. User evaluation demonstrates that these tools promote ease of manipulation as well as fast identification of concepts and associations that cannot be easily expressed by conventional graph visualization alone. To allow better space usage for complex networks, a graph mapping on the surface of a sphere is also implemented.
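A small sketch of the attribute-mapping idea under the assumption that a plain PCA projection of the attribute vectors is used (the paper's actual projection technique may differ; the data here are hypothetical):

```python
# Attribute-mapping sketch (assumed approach): project each node's attribute
# vector to 2-D with PCA via power iteration, so nodes with similar
# attributes land near each other in the layout - no force simulation needed.

def pca_2d(vectors, iters=200):
    n, d = len(vectors), len(vectors[0])
    mean = [sum(v[j] for v in vectors) / n for j in range(d)]
    x = [[v[j] - mean[j] for j in range(d)] for v in vectors]

    def project(axis):
        return [sum(row[j] * axis[j] for j in range(d)) for row in x]

    def power_axis(deflate=None):
        axis = [1.0 / (j + 1) for j in range(d)]  # arbitrary start vector
        for _ in range(iters):
            if deflate:  # orthogonalize against the first axis
                dot = sum(a * b for a, b in zip(axis, deflate))
                axis = [a - dot * b for a, b in zip(axis, deflate)]
            scores = project(axis)
            axis = [sum(scores[i] * x[i][j] for i in range(n)) for j in range(d)]
            norm = sum(a * a for a in axis) ** 0.5 or 1.0
            axis = [a / norm for a in axis]
        return axis

    ax1 = power_axis()
    ax2 = power_axis(deflate=ax1)
    return list(zip(project(ax1), project(ax2)))

# Two hypothetical attribute clusters: similar nodes should be placed close.
attrs = [[0.0, 0.1, 0.0], [0.1, 0.0, 0.1],
         [5.0, 5.1, 5.0], [5.1, 5.0, 4.9]]
layout = pca_2d(attrs)
```

The resulting coordinates can then seed a force algorithm, matching the abstract's observation that a projection-based initial placement improves the final layout.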

Relevance: 60.00%

Abstract:

This thesis is concerned with the calculation of virtual Compton scattering (VCS) in manifestly Lorentz-invariant baryon chiral perturbation theory to fourth order in the momentum and quark-mass expansion. In the one-photon-exchange approximation, the VCS process is experimentally accessible in photon electro-production and has been measured at the MAMI facility in Mainz, at MIT-Bates, and at Jefferson Lab. Through VCS one gains new information on the nucleon structure beyond its static properties, such as charge, magnetic moments, or form factors. The nucleon response to an incident electromagnetic field is parameterized in terms of 2 spin-independent (scalar) and 4 spin-dependent (vector) generalized polarizabilities (GP). In analogy to classical electrodynamics the two scalar GPs represent the induced electric and magnetic dipole polarizability of a medium. For the vector GPs, a classical interpretation is less straightforward. They are derived from a multipole expansion of the VCS amplitude. This thesis describes the first calculation of all GPs within the framework of manifestly Lorentz-invariant baryon chiral perturbation theory. Because of the comparatively large number of diagrams - 100 one-loop diagrams need to be calculated - several computer programs were developed dealing with different aspects of Feynman diagram calculations. One can distinguish between two areas of development, the first concerning the algebraic manipulations of large expressions, and the second dealing with numerical instabilities in the calculation of one-loop integrals. In this thesis we describe our approach using Mathematica and FORM for algebraic tasks, and C for the numerical evaluations. We use our results for real Compton scattering to fix the two unknown low-energy constants emerging at fourth order. Furthermore, we present the results for the differential cross sections and the generalized polarizabilities of VCS off the proton.
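The classical analogy mentioned above can be written out explicitly (standard textbook relations, not taken from the thesis; conventions for the 4π factors vary between references): static external fields induce dipole moments proportional to the electric and magnetic polarizabilities,

```latex
% Induced dipole moments of a polarizable medium (classical analogy for
% the two scalar generalized polarizabilities; conventions vary):
\vec{p} = 4\pi \alpha_E \vec{E}, \qquad
\vec{m} = 4\pi \beta_M \vec{H}
```

Here α_E and β_M play the role that the two scalar GPs play for the nucleon's response to the virtual photon field.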

Relevance: 60.00%

Abstract:

The fourth industrial revolution is paving the way for Industrial Internet of Things applications, where industrial assets (e.g., robotic arms, valves, pistons) are equipped with a large number of wireless devices (i.e., microcontroller boards that embed sensors and actuators) to enable a plethora of new applications, such as analytics, diagnostics, and monitoring, as well as supervisory and safety-control use-cases. Nevertheless, current wireless technologies, such as Wi-Fi, Bluetooth, and even private 5G networks, cannot fulfill all the requirements set out by the Industry 4.0 paradigm, thus opening up new 6G-oriented research trends, such as the use of THz frequencies. In light of the above, this thesis provides (i) a broad overview of the main use-cases, requirements, and key enabling wireless technologies foreseen by the fourth industrial revolution, and (ii) innovative contributions, both theoretical and empirical, to enhance the performance of current and future wireless technologies at different levels of the protocol stack. In particular, at the physical layer, signal processing techniques are exploited to analyze two multiplexing schemes, namely Affine Frequency Division Multiplexing and Orthogonal Chirp Division Multiplexing, which seem promising for high-frequency wireless communications. At the medium access layer, three protocols for intra-machine communications are proposed, of which one is based on LoRa at 2.4 GHz and the others work in the THz band. Different scheduling algorithms for private industrial 5G networks are compared, and two main proposals are described, i.e., a decentralized scheme that leverages machine learning techniques to better address aperiodic traffic patterns, and a centralized contention-based design that serves a federated learning industrial application. Results are provided in terms of numerical evaluations, simulation results, and real-world experiments. 
Several improvements over the state of the art were obtained, and the description of up-and-running testbeds demonstrates the feasibility of some of the theoretical concepts in a real industrial plant.

Relevance: 30.00%

Abstract:

OBJECTIVE: To correlate two unidimensional scales for measuring self-reported pain intensity in the elderly, and to identify a preference for one of the scales.

METHOD: A study conducted with 101 elderly people living in nursing homes who reported any pain and scored ≥ 13 on the Mini-Mental State Examination. An 11-point Numeric Rating Scale (NRS) and a five-point Verbal Descriptor Scale (VDS) were compared in three evaluations: overall, at rest, and during movement.

RESULTS: Women were more representative (61.4%) and the average age was 77.0 ± 9.1 years. The NRS was completed by 94.8% of the elderly, while the VDS by 100%. The association between the mean NRS scores and the VDS categories was significant, indicating convergent validity and a similar metric between the scales.

CONCLUSION: Pain measurements among institutionalized elderly can be made with the NRS and the VDS; however, the scale preferred by the elderly was the VDS, regardless of gender.
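A minimal sketch of the convergent-validity check (hypothetical ratings; the real study used 101 residents and three evaluation conditions): the mean NRS score should increase monotonically across the ordered VDS categories:

```python
# Convergent-validity sketch (hypothetical scores): check that the mean NRS
# rating (0-10) increases monotonically across the five ordered VDS
# categories, i.e. the two scales share a similar metric.

vds_levels = ["none", "mild", "moderate", "severe", "worst"]
# (vds_category, nrs_score) pairs for a handful of hypothetical residents
ratings = [("none", 0), ("none", 1), ("mild", 2), ("mild", 3),
           ("moderate", 5), ("moderate", 6), ("severe", 7),
           ("severe", 8), ("worst", 9), ("worst", 10)]

def mean_nrs_by_vds(pairs):
    groups = {}
    for cat, score in pairs:
        groups.setdefault(cat, []).append(score)
    return {cat: sum(v) / len(v) for cat, v in groups.items()}

means = mean_nrs_by_vds(ratings)
ordered = [means[c] for c in vds_levels]
monotonic = all(a < b for a, b in zip(ordered, ordered[1:]))
```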

Relevance: 30.00%

Abstract:

4-Dimensional Variational Data Assimilation (4DVAR) assimilates observations through the minimisation of a least-squares objective function, which is constrained by the model flow. We refer to 4DVAR as strong-constraint 4DVAR (sc4DVAR) in this thesis, as it assumes the model is perfect. Relaxing this assumption gives rise to weak-constraint 4DVAR (wc4DVAR), leading to a different minimisation problem with more degrees of freedom. We consider two wc4DVAR formulations in this thesis: the model error formulation and the state estimation formulation. The 4DVAR objective function is traditionally solved using gradient-based iterative methods. The principal method used in Numerical Weather Prediction today is the Gauss-Newton approach. This method introduces a linearised `inner-loop' objective function which, upon convergence, updates the solution of the non-linear `outer-loop' objective function. This requires many evaluations of the objective function and its gradient, which emphasises the importance of the Hessian. The eigenvalues and eigenvectors of the Hessian provide insight into the degree of convexity of the objective function, while also indicating the difficulty one may encounter while iteratively solving 4DVAR. The condition number of the Hessian is an appropriate measure of the sensitivity of the problem to input data. The condition number can also indicate the rate of convergence and solution accuracy of the minimisation algorithm. This thesis investigates the sensitivity of the solution process minimising both wc4DVAR objective functions to the internal assimilation parameters composing the problem. We gain insight into these sensitivities by bounding the condition numbers of the Hessians of both objective functions. We also precondition the model error objective function and show improved convergence. Using the bounds, we show that both formulations' sensitivities are related to the error variance balance, the assimilation window length, and the correlation length-scales. 
We further demonstrate this through numerical experiments on the condition number and data assimilation experiments using linear and non-linear chaotic toy models.
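A toy illustration of the link between error-variance balance and Hessian conditioning (scalar diagonal case with H = I; the variances are hypothetical, not from the thesis): the Gauss-Newton Hessian S = B⁻¹ + Hᵀ R⁻¹ H is then diagonal, so its condition number is the ratio of its extreme diagonal entries.

```python
# Toy 4DVAR Hessian sketch (diagonal case, hypothetical variances):
# S = B^{-1} + H^T R^{-1} H with H = I, so S is diagonal and its condition
# number is max(diag) / min(diag).

def hessian_condition(bg_var, obs_var):
    """Condition number of the diagonal Hessian for given error variances."""
    diag = [1.0 / b + 1.0 / r for b, r in zip(bg_var, obs_var)]
    return max(diag) / min(diag)

balanced = hessian_condition(bg_var=[1.0, 1.0], obs_var=[1.0, 1.0])
unbalanced = hessian_condition(bg_var=[1.0, 100.0], obs_var=[1.0, 100.0])
# Widening the spread of error variances worsens the conditioning, and
# hence the expected convergence of the inner-loop minimisation.
```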

Relevance: 30.00%

Abstract:

Hurricanes are among the most destructive and costly natural hazards to the built environment, and their impact on low-rise buildings in particular is unacceptably severe. The major objective of this research was to perform a parametric evaluation of internal pressure (IP) for wind-resistant design of low-rise buildings and wind-driven natural ventilation applications. For this purpose, a multi-scale experimental approach, i.e. full-scale at the Wall of Wind (WoW) and small-scale at the Boundary Layer Wind Tunnel (BLWT), combined with Computational Fluid Dynamics (CFD), was adopted. This provided new capability to assess wind pressures realistically on internal volumes ranging from the small spaces formed between roof tiles and the deck, to attics, to room partitions. Effects of sudden breaching, existing dominant openings on building envelopes, and compartmentalization of the building interior on the IP were systematically investigated. Results of this research indicated: (i) for sudden breaching of dominant openings, the transient overshooting response was lower than the subsequent steady-state peak IP, and internal volume correction for low-wind-speed testing facilities was necessary; for example, a building without volume correction experienced a response four times faster and exhibited 30–40% lower mean and peak IP; (ii) for existing openings, vent openings uniformly distributed along the roof alleviated, whereas one-sided openings aggravated, the IP; (iii) larger dominant openings exhibited a higher IP on the building envelope, and an off-center opening on the wall exhibited (30–40%) higher IP than center-located openings; (iv) compartmentalization amplified the intensity of the IP; and (v) significant underneath pressure was measured for field tiles, warranting its consideration during net pressure evaluations. 
The study aimed at wind-driven natural ventilation indicated: (i) the IP due to cross ventilation was 1.5 to 2.5 times higher for A_inlet/A_outlet > 1 compared to cases where A_inlet/A_outlet < 1; this in effect reduced the mixing of air inside the building and hence the ventilation effectiveness; (ii) the presence of multi-room partitioning increased the pressure differential and consequently the air exchange rate. Overall, good agreement was found between the observed large-scale, small-scale, and CFD-based IP responses. Comparisons with ASCE 7-10 consistently demonstrated that the code underestimated peak positive and suction IP.
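A back-of-the-envelope sketch of why the inlet/outlet area ratio matters (steady orifice-flow balance with hypothetical envelope pressures; not the study's experimental or CFD model):

```python
# Steady cross-ventilation sketch (orifice-flow balance, hypothetical
# pressures): with a windward opening A_in and a leeward opening A_out,
# equating inflow and outflow through sharp-edged orifices gives the
# internal pressure  P_i = (A_in^2*P_w + A_out^2*P_l) / (A_in^2 + A_out^2).

def internal_pressure(a_in, a_out, p_windward, p_leeward):
    """Internal pressure [Pa] from the steady inflow/outflow balance."""
    w, l = a_in ** 2, a_out ** 2
    return (w * p_windward + l * p_leeward) / (w + l)

p_w, p_l = 50.0, -20.0  # Pa, hypothetical windward / leeward pressures
p_big_inlet = internal_pressure(2.0, 1.0, p_w, p_l)   # A_inlet/A_outlet > 1
p_big_outlet = internal_pressure(1.0, 2.0, p_w, p_l)  # A_inlet/A_outlet < 1
# A dominant inlet drags the internal pressure toward the windward value,
# consistent with the higher IP reported for A_inlet/A_outlet > 1.
```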

Relevance: 20.00%

Abstract:

Below-cloud scavenging processes have been investigated by means of a numerical simulation, local atmospheric conditions, and particulate matter (PM) concentrations at different sites in Germany. The below-cloud scavenging model was coupled with a bulk particulate matter counter (TSI Portacount) dataset, consisting of the predicted variability of particulate air concentrations during chosen rain events. The TSI samples and meteorological parameters were obtained during three winter campaigns: at Deuselbach, March 1994, consisting of three different events; Sylt, April 1994; and Freiburg, March 1995. The results show good agreement between modeled and observed air concentrations, emphasizing the quality of the conceptual model used in the below-cloud scavenging numerical modeling. The comparisons between modeled and observed data also presented squared Pearson correlation coefficients above 0.7, significant in all cases except the Freiburg campaign event. The differences between the numerical simulations and the observed dataset are explained by wind direction changes and, perhaps, the absence of advective mass terms in the model. These results validate previous works based on the same conceptual model.
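A minimal sketch of the below-cloud washout concept (assuming exponential decay with a power-law rain-rate parameterisation of the scavenging coefficient; the constants are hypothetical, not those of the model used in the study):

```python
import math

# Below-cloud scavenging sketch (assumed exponential washout): during rain,
# the aerosol concentration decays as C(t) = C0 * exp(-Lambda * t), where
# Lambda is the scavenging coefficient. Lambda = a * R**b is a common
# empirical rain-rate parameterisation (a, b hypothetical here).

def washout(c0, rain_rate, t_hours, a=1e-4, b=0.8):
    """Remaining concentration after t_hours of rain at rate R [mm/h]."""
    lam = a * rain_rate ** b          # scavenging coefficient [1/s]
    return c0 * math.exp(-lam * t_hours * 3600.0)

c_start = 100.0                       # particles/cm^3, hypothetical
c_light = washout(c_start, rain_rate=1.0, t_hours=2.0)
c_heavy = washout(c_start, rain_rate=10.0, t_hours=2.0)
# Heavier rain scavenges more efficiently, leaving a lower concentration.
```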

Relevance: 20.00%

Abstract:

Mixing layers are present in very different types of physical situations, such as atmospheric flows, aerodynamics, and combustion. It is, therefore, a well-researched subject, but there are aspects that require further study. Here, the instability of two- and three-dimensional perturbations in the compressible mixing layer was investigated by numerical simulation. In the numerical code, the derivatives were discretized using high-order compact finite-difference schemes. A stretching in the normal direction was implemented with the twofold objective of reducing the sound waves generated by the shear region and improving the resolution near the center. The compact schemes were modified to work with non-uniform grids. Numerical tests started with an analysis of the growth rate in the linear regime to verify the code implementation. Tests were also performed in the non-linear regime, and it was possible to reproduce the vortex roll-up and pairing, both in two- and three-dimensional situations. Amplification-rate analysis was also performed for the secondary instability of this flow. It was found that, for essentially incompressible flow, maximum growth rates occurred for a spanwise wavelength of approximately 2/3 of the streamwise spacing of the vortices. This result demonstrated the applicability of the theory developed by Pierrehumbert and Widnall. Compressibility effects were then considered, and the maximum growth rates obtained for relatively high Mach numbers (typically under 0.8) were also presented.
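A compact finite-difference scheme of the kind mentioned above can be sketched as follows (the standard fourth-order Padé scheme on a uniform grid with explicit one-sided closures at the ends; the paper's non-uniform-grid modification is not reproduced here):

```python
import math

# Compact finite-difference sketch (standard 4th-order Pade scheme):
#   (1/4) f'_{i-1} + f'_i + (1/4) f'_{i+1} = 3 (f_{i+1} - f_{i-1}) / (4 h)
# closed at both ends with explicit one-sided 2nd-order differences.

def compact_derivative(f, h):
    """First derivative of samples f on a uniform grid of spacing h."""
    n = len(f)
    # Tridiagonal system: a[i]*d[i-1] + b[i]*d[i] + c[i]*d[i+1] = rhs[i]
    a = [0.0] + [0.25] * (n - 2) + [0.0]
    b = [1.0] * n
    c = [0.0] + [0.25] * (n - 2) + [0.0]
    rhs = [0.0] * n
    rhs[0] = (-3.0 * f[0] + 4.0 * f[1] - f[2]) / (2.0 * h)   # one-sided
    rhs[-1] = (3.0 * f[-1] - 4.0 * f[-2] + f[-3]) / (2.0 * h)
    for i in range(1, n - 1):
        rhs[i] = 3.0 * (f[i + 1] - f[i - 1]) / (4.0 * h)
    # Thomas algorithm for the tridiagonal solve
    for i in range(1, n):
        m = a[i] / b[i - 1]
        b[i] -= m * c[i - 1]
        rhs[i] -= m * rhs[i - 1]
    d = [0.0] * n
    d[-1] = rhs[-1] / b[-1]
    for i in range(n - 2, -1, -1):
        d[i] = (rhs[i] - c[i] * d[i + 1]) / b[i]
    return d

# Check against d/dx sin(x) = cos(x) on [0, 2*pi]
n = 41
h = 2.0 * math.pi / (n - 1)
x = [i * h for i in range(n)]
deriv = compact_derivative([math.sin(v) for v in x], h)
max_err = max(abs(d - math.cos(v)) for d, v in zip(deriv, x))
```

The interior scheme is fourth-order accurate; the error here is dominated by the second-order boundary closures, which is the usual trade-off for non-periodic compact schemes.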

Relevance: 20.00%

Abstract:

In recent years, we have experienced increasing interest in the understanding of the physical properties of collisionless plasmas, mostly because of the large number of astrophysical environments (e.g. the intracluster medium (ICM)) containing magnetic fields that are strong enough to be coupled with the ionized gas and characterized by densities sufficiently low to prevent pressure isotropization with respect to the magnetic line direction. Under these conditions, a new class of kinetic instabilities arises, such as the firehose and mirror instabilities, which have been studied extensively in the literature. Their role in the turbulence evolution and cascade process in the presence of pressure anisotropy, however, is still unclear. In this work, we present the first statistical analysis of turbulence in collisionless plasmas using three-dimensional numerical simulations and solving double-isothermal magnetohydrodynamic equations with the Chew-Goldberger-Low laws closure (CGL-MHD). We study models with different initial conditions to account for the firehose and mirror instabilities and to obtain different turbulent regimes. We found that the CGL-MHD subsonic and supersonic turbulence shows small differences compared to the MHD models in most cases. However, in the regimes of strong kinetic instabilities, the statistics, i.e. the probability distribution functions (PDFs) of density and velocity, are very different. In subsonic models, the instabilities cause an increase in the dispersion of density, while the dispersion of velocity is increased by a large factor in some cases. Moreover, the spectra of density and velocity show increased power at small scales, explained by the high growth rate of the instabilities. Finally, we calculated the structure functions of velocity and density fluctuations in the local reference frame defined by the direction of the magnetic lines. 
The results indicate that in some cases the instabilities significantly increase the anisotropy of the fluctuations. These results, even though preliminary and restricted to very specific conditions, show that the physical properties of turbulence in collisionless plasmas, such as those found in the ICM, may be very different from what has been largely believed.
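The second-order structure function mentioned above can be sketched on a synthetic 1-D record (illustrative only; the study computes it for 3-D simulation data in a frame aligned with the magnetic field):

```python
import math

# Structure-function sketch (standard diagnostic): the second-order
# structure function S2(l) = <|v(x + l) - v(x)|^2> of a 1-D velocity
# record, here a smooth synthetic field; S2 grows with separation l.

def structure_function(v, lag):
    """Second-order structure function at integer lag (grid units)."""
    diffs = [(v[i + lag] - v[i]) ** 2 for i in range(len(v) - lag)]
    return sum(diffs) / len(diffs)

n = 1024
v = [math.sin(2.0 * math.pi * i / n) for i in range(n)]  # synthetic field
s2_small = structure_function(v, 4)
s2_large = structure_function(v, 64)
# Larger separations accumulate larger velocity increments.
```

Computing S2 separately for lags parallel and perpendicular to the local magnetic field direction is what quantifies the anisotropy discussed in the abstract.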

Relevance: 20.00%

Abstract:

Using series solutions and time-domain evolutions, we probe the eikonal limit of the gravitational and scalar-field quasinormal modes of large black holes and black branes in anti-de Sitter backgrounds. These results are particularly relevant for the AdS/CFT correspondence, since the eikonal regime is characterized by the existence of long-lived modes which (presumably) dominate the decay time scale of the perturbations. We confirm all the main qualitative features of these slowly damped modes as predicted by Festuccia and Liu [G. Festuccia and H. Liu, arXiv:0811.1033] for the scalar-field (tensor-type gravitational) fluctuations. However, quantitatively we find dimension-dependent correction factors. We also investigate the dependence of the quasinormal mode frequencies on the horizon radius of the black hole (brane) and the angular momentum (wave number) of vector- and scalar-type gravitational perturbations.
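As a small illustration of how a quasinormal frequency can be read off a time-domain evolution (synthetic damped sinusoid with hypothetical ω and τ, not data from the paper):

```python
import math

# Ringdown sketch (synthetic signal, hypothetical parameters): a quasinormal
# mode appears in a time-domain evolution as a damped sinusoid
#   psi(t) = exp(-t / tau) * sin(omega * t).
# The oscillation frequency can be estimated from successive upward zero
# crossings, which are exactly one period apart.

omega, tau = 3.0, 5.0
dt = 1e-3
signal = [math.exp(-i * dt / tau) * math.sin(omega * i * dt)
          for i in range(20000)]

crossings = [i * dt for i in range(1, len(signal))
             if signal[i - 1] < 0.0 <= signal[i]]   # upward zero crossings
periods = [b - a for a, b in zip(crossings, crossings[1:])]
omega_est = 2.0 * math.pi / (sum(periods) / len(periods))
```

The damping time could be extracted analogously from the decay of successive peak amplitudes; together they give the complex quasinormal frequency ω − i/τ.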