978 results for Network measurement


Relevance: 30.00%

Abstract:

In many countries, formal or informal palliative care networks (PCNs) have evolved to better integrate community-based services for individuals with a life-limiting illness. We conducted a cross-sectional survey using a customized tool to determine the perceptions of the processes of palliative care delivery reflective of horizontal integration from the perspective of nurses, physicians and allied health professionals working in a PCN, and to assess the utility of this tool. The process elements examined were part of a conceptual framework for evaluating integration of a system of care and centred on interprofessional collaboration. We used the Index of Interdisciplinary Collaboration (IIC) as a basis of measurement. The 86 respondents (85% response rate) placed high value on working collaboratively, and most reported being part of an interprofessional team. The survey tool showed utility in identifying strengths and gaps in integration across the network and in detecting variability in some factors according to respondent agency affiliation and profession. Specifically, support for interprofessional communication and for evaluative activities was viewed as insufficient. Impediments to these aspects of horizontal integration may reflect workload constraints, differences in agency operations or an absence of key structural features.


Relevance: 30.00%

Abstract:

In modern measurement and control systems, the available time and resources are often not only limited but may also change during the operation of the system. In these cases, so-called anytime algorithms can be used advantageously. While different soft computing methods are widely used in system modeling, their usability in these cases is limited.
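
As background, a minimal sketch of the anytime idea the abstract refers to: an iterative computation that can be stopped whenever the time budget runs out and still returns its best answer so far. The task here (a series estimate of pi) is an illustrative placeholder, not taken from the paper.

```python
import time

def anytime_pi(budget_seconds):
    """Anytime estimator: answer quality grows with the time budget."""
    estimate, k, sign = 0.0, 0, 1.0
    deadline = time.monotonic() + budget_seconds
    while time.monotonic() < deadline:        # stop when the budget is spent
        estimate += sign * 4.0 / (2 * k + 1)  # next Leibniz series term
        sign, k = -sign, k + 1
    return estimate                           # best result available so far

print(anytime_pi(0.001), anytime_pi(0.1))    # more time, better answer
```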

Relevance: 30.00%

Abstract:

With the increasing complexity of current networks, the need for Self-Organizing Networks (SON), which aim to automate most of the associated radio planning and optimization tasks, has become evident. Within SON, this paper aims to optimize the Neighbour Cell List (NCL) for Long Term Evolution (LTE) evolved NodeBs (eNBs). An algorithm composed of three decision stages was developed: distance-based, Radio Frequency (RF) measurement-based and Handover (HO) statistics-based. The distance-based decision proposes a new NCL taking into account the eNB location and interference tiers, based on the quadrants method. The last two decisions consider signal strength measurements and HO statistics, respectively; they also assign a ranking to each eNB and decide on neighbour relation addition/removal based on user-defined constraints. The algorithms were developed and implemented on top of an existing professional radio network optimization tool. Several case studies were produced using real data from a Portuguese LTE mobile operator. © 2014 IEEE.
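
For illustration, a hedged sketch of the simplest of the three stages, a distance-based neighbour-list proposal. The real tool also uses interference tiers, the quadrants method, RF measurements and HO statistics, all omitted here; every name below is illustrative, not from the paper.

```python
import math

def propose_ncl(serving, candidates, max_neighbours=16):
    """serving: (x, y) of the serving eNB; candidates: dict name -> (x, y).
    Return candidate eNBs ranked by distance, nearest first."""
    dist = {name: math.hypot(xy[0] - serving[0], xy[1] - serving[1])
            for name, xy in candidates.items()}
    ranked = sorted(dist, key=dist.get)
    return ranked[:max_neighbours]

print(propose_ncl((0.0, 0.0), {"eNB_A": (1.2, 0.5),
                               "eNB_B": (5.0, 4.0),
                               "eNB_C": (0.3, 0.9)}))
```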

Relevance: 30.00%

Abstract:

In operation since 2008, the ATLAS experiment is the largest of all the experiments at the LHC. The ATLAS-MPX (MPX) detectors installed in ATLAS are based on the Medipix2 silicon pixel detector, developed by the Medipix collaboration at CERN for real-time imaging. The MPX detectors can be used to measure luminosity. They were installed at sixteen different locations in the experimental and technical areas of ATLAS in 2008. The MPX network successfully collected data independently of the ATLAS data-acquisition chain from 2008 to 2013. Each MPX detector provides measurements of the integrated luminosity of the LHC. This thesis describes the calibration method for the absolute luminosity measured with the MPX detectors and the performance of the MPX detectors on the 2012 luminosity data. A luminosity calibration constant was determined. The calibration is based on the van der Meer (vdM) technique, which allows the size of the two overlapping beams to be measured in the vertical and horizontal planes at the ATLAS interaction point (IP1). The determination of the absolute luminosity requires precise knowledge of the beam intensities and of the number of colliding bunches. The three calibration scans were analysed, and the results obtained with the MPX detectors were compared with those of the other ATLAS detectors dedicated specifically to luminosity measurement. The luminosity obtained from the vdM scans was compared with the luminosity of proton-proton collisions before and after the vdM scans. The MPX detector network provides reliable information for the luminosity determination of the ATLAS experiment over a wide range (from 5 × 10^29 cm−2 s−1 to 7 × 10^33 cm−2 s−1).
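
For context, a hedged sketch of the standard vdM luminosity relation that underlies such a calibration (conventional notation, not quoted from the thesis):

```latex
\mathcal{L} = \frac{n_b \, f_r \, N_1 N_2}{2\pi \, \Sigma_x \Sigma_y}
```

where n_b is the number of colliding bunch pairs, f_r the LHC revolution frequency, N_1 and N_2 the bunch populations, and Σ_x, Σ_y the convolved beam sizes extracted from the horizontal and vertical scans.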

Relevance: 30.00%

Abstract:

Our goal in this paper is to assess the reliability and validity of egocentered network data using multilevel analysis (Muthén, 1989; Hox, 1993) under the multitrait-multimethod (MTMM) approach. The confirmatory factor analysis model for multitrait-multimethod data (Werts & Linn, 1970; Andrews, 1984) is used for our analyses. In this study we reanalyse part of the data from another study (Kogovšek et al., 2002) carried out on a representative sample of the inhabitants of Ljubljana. The traits used in our article are the name interpreters. We consider egocentered network data as hierarchical; therefore a multilevel analysis is required. We use Muthén's partial maximum likelihood approach, called the pseudobalanced solution (Muthén, 1989, 1990, 1994), which produces estimates close to maximum likelihood for large ego sample sizes (Hox & Maas, 2001). Several analyses are carried out to compare this multilevel analysis with classic methods of analysis, such as those in Kogovšek et al. (2002), who analysed the data only at the group (ego) level, considering averages over all alters within the ego. We show that some of the results obtained by classic methods are biased and that multilevel analysis provides more detailed information that greatly enriches the interpretation of the reliability and validity of hierarchical data. Within-ego and between-ego reliabilities and validities and other related quality measures are defined, computed and interpreted.
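
As background, a hedged sketch of the CFA decomposition for MTMM data in the Werts & Linn / Andrews tradition the abstract cites (notation assumed here, not the paper's):

```latex
y_{tm} = \lambda_{tm} \, T_t + \gamma_{tm} \, M_m + \varepsilon_{tm}
```

where T_t is the trait factor (here, a name interpreter), M_m the method factor and ε_tm random error; validity and method effects follow from the shares of variance due to λ_tm and γ_tm. In the multilevel version these variances are further split into within-ego and between-ego components.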

Relevance: 30.00%

Abstract:

Compositional data, also called multiplicative ipsative data, are common in survey research instruments in areas such as time use, budget expenditure and social networks. Compositional data are usually expressed as proportions of a total, which must sum to 1. Owing to their constrained nature, statistical analysis in general, and estimation of measurement quality with a confirmatory factor analysis model for multitrait-multimethod (MTMM) designs in particular, are challenging tasks. Compositional data are highly non-normal, as they range within the 0-1 interval. One component can only increase if some other(s) decrease, which results in spurious negative correlations among components that cannot be accounted for by the MTMM model parameters. In this article we show how researchers can use the correlated uniqueness model for MTMM designs in order to evaluate the measurement quality of compositional indicators. We suggest using the additive log-ratio transformation of the data, discuss several approaches to dealing with zero components and explain how the interpretation of MTMM designs differs from the application to standard unconstrained data. We illustrate the method on data of social network composition, expressed in percentages of partner, family, friends and other members, from which we conclude that the face-to-face collection mode is generally superior to the telephone mode, although primacy effects are higher in the face-to-face mode. Compositions of strong ties (such as partner) are measured with higher quality than those of weaker ties (such as other network members).
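
For illustration, a minimal sketch of the additive log-ratio transform the abstract suggests. Replacing zeros by a small epsilon is only one of the zero-handling choices the paper discusses; the epsilon value and variable names are assumptions.

```python
import numpy as np

def alr(x, eps=1e-6):
    """Additive log-ratio transform of a composition x (proportions of 1).
    Zero components are clipped to eps, then the vector is renormalised."""
    x = np.clip(np.asarray(x, dtype=float), eps, None)
    x = x / x.sum()
    return np.log(x[:-1] / x[-1])   # last component serves as the reference

# e.g. shares of partner, family, friends, other network members
print(alr([0.25, 0.40, 0.20, 0.15]))
```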

Relevance: 30.00%

Abstract:

Cloud optical depth is one of the most poorly observed climate variables. The new “cloud mode” capability in the Aerosol Robotic Network (AERONET) will inexpensively yet dramatically increase cloud optical depth observations in both number and accuracy. Cloud mode optical depth retrievals from AERONET were evaluated at the Atmospheric Radiation Measurement program’s Oklahoma site in sky conditions ranging from broken clouds to overcast. For overcast cases, the 1.5 min average AERONET cloud mode optical depths agreed to within 15% of those from a standard ground-based flux method. For broken cloud cases, AERONET retrievals also captured rapid variations detected by the microwave radiometer. For a 3-year climatology derived from all nonprecipitating clouds, AERONET monthly mean cloud optical depths are generally larger than cloud radar retrievals because the current cloud mode observation strategy is biased toward measurements of optically thick clouds. This study has demonstrated a new way to enhance the existing AERONET infrastructure to observe cloud optical properties on a global scale.

Relevance: 30.00%

Abstract:

We report on the first real-time ionospheric predictions network and its capabilities to ingest a global database and forecast F-layer characteristics and "in situ" electron densities along the track of an orbiting spacecraft. A global network of ionosonde stations reported around-the-clock observations of F-region heights and densities, and an on-line library of models provided forecasting capabilities. Each model was tested against the incoming data; relative accuracies were intercompared to determine the best overall fit to the prevailing conditions; and the best-fit model was used to predict ionospheric conditions on an orbit-to-orbit basis for the 12-hour period following a twice-daily model test and validation procedure. It was found that the best-fit model often provided averaged (i.e., climatologically based) accuracies better than 5% in predicting the heights and critical frequencies of the F-region peaks in the latitudinal domain of the TSS-1R flight path. There was a sharp contrast, however, in model-measurement comparisons involving predictions of actual, unaveraged, along-track densities at the 295 km orbital altitude of TSS-1R. In this case, extrema in the first-principle models varied by as much as an order of magnitude in density predictions, and the best-fit models were found to disagree with the "in situ" observations of Ne by as much as 140%. The discrepancies are interpreted as a manifestation of difficulties in accurately and self-consistently modeling the external controls of solar and magnetospheric inputs and the spatial and temporal variabilities in electric fields, thermospheric winds, plasmaspheric fluxes, and chemistry.
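
For illustration, a hedged sketch of the twice-daily test-and-select step the abstract describes: score each library model against incoming ionosonde data and keep the best fit for the next forecast window. Names and interfaces are illustrative placeholders, not from the paper.

```python
import numpy as np

def select_best_model(observed, predictions):
    """observed: array of measured values (e.g. foF2 from ionosondes).
    predictions: dict mapping model name -> array of predicted values.
    Return the name of the model with the lowest mean relative error."""
    errors = {name: float(np.mean(np.abs(pred - observed) / observed))
              for name, pred in predictions.items()}
    best = min(errors, key=errors.get)
    return best, errors[best]

obs = np.array([8.1, 7.9, 9.2])
best, err = select_best_model(obs, {"IRI-like": np.array([8.0, 8.2, 9.0]),
                                    "empirical": np.array([7.1, 7.0, 8.1])})
print(best, err)   # winning model forecasts the next 12-hour period
```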

Relevance: 30.00%

Abstract:

The Finnish Meteorological Institute, in collaboration with the University of Helsinki, has established a new ground-based remote-sensing network in Finland. The network consists of five topographically, ecologically and climatically different sites distributed from southern to northern Finland. The main goal of the network is to monitor air pollution and boundary layer properties in near real time, with a Doppler lidar and ceilometer at each site. In addition to these operational tasks, two sites are members of the Aerosols, Clouds and Trace gases Research InfraStructure Network (ACTRIS); a Ka-band cloud radar at Sodankylä will provide cloud retrievals within CloudNet, and a multi-wavelength Raman lidar, PollyXT (POrtabLe Lidar sYstem eXTended), in Kuopio provides optical and microphysical aerosol properties through EARLINET (the European Aerosol Research Lidar Network). Three C-band weather radars are located in the Helsinki metropolitan area and are deployed for operational and research applications. We performed two inter-comparison campaigns to investigate the Doppler lidar performance, to compare the backscatter signal and wind profiles, and to optimize the lidar sensitivity by adjusting the telescope focus length and data-integration time to ensure sufficient signal-to-noise ratio (SNR) in low-aerosol-content environments. In terms of statistical characterization, the wind-profile comparison showed good agreement between different lidars. Initially, there was a discrepancy in the SNR and attenuated backscatter coefficient profiles, which arose from an incorrectly reported telescope focus setting on one instrument, together with the need to calibrate. After diagnosing the true telescope focus length, calculating a new attenuated backscatter coefficient profile with the new telescope function and taking calibration into account, the resulting attenuated backscatter profiles all showed good agreement with each other. It was thought that harsh Finnish winters could pose problems, but, due to the built-in heating systems, low ambient temperatures had no, or only a minor, impact on lidar operation, including scanning-head motion. However, accumulation of snow and ice on the lens has been observed, which can lead to the formation of a water/ice layer, thus attenuating the signal inconsistently. Thus, care must be taken to ensure continuous snow removal.

Relevance: 30.00%

Abstract:

The urban heat island is a well-known phenomenon that impacts a wide variety of city operations. With greater availability of cheap meteorological sensors, it is possible to measure the spatial patterns of urban atmospheric characteristics with greater resolution. To develop robust and resilient networks, recognizing that sensors may malfunction, it is important to know when measurement points are providing additional information, and also the minimum number of sensors needed to provide spatial information for particular applications. Here we consider the example of temperature data, and the urban heat island, through analysis of a network of sensors in the Tokyo metropolitan area (Extended METROS). The effect of reducing observation points from an existing meteorological measurement network is considered, using random sampling and sampling with clustering. The results indicated that sampling with hierarchical clustering can yield similar temperature patterns with up to a 30% reduction in measurement sites in Tokyo. The methods presented have broader utility in evaluating the robustness and resilience of existing urban temperature networks and in how networks can be enhanced by new mobile and open data sources.
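
For illustration, a hedged sketch of network thinning by hierarchical clustering in the spirit of the abstract: cluster sensors with similar temperature series and keep one representative per cluster. The 30% target, the Ward linkage and the first-member representative rule are all illustrative choices, not the paper's exact method.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

def thin_network(temps, keep_fraction=0.7):
    """temps: (n_sensors, n_times) array of temperature time series.
    Return indices of sensors to retain, one per cluster."""
    n_keep = max(1, int(round(keep_fraction * temps.shape[0])))
    Z = linkage(temps, method="ward")                  # similarity of series
    labels = fcluster(Z, t=n_keep, criterion="maxclust")
    keep = [int(np.where(labels == c)[0][0]) for c in np.unique(labels)]
    return sorted(keep)

temps = np.random.default_rng(0).random((20, 48))      # synthetic stand-in
print(thin_network(temps))                             # ~30% fewer sensors
```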

Relevance: 30.00%

Abstract:

The increase in the importance of intangibles in business competitiveness has made investment selection more challenging for investors who, under high information asymmetry, tend to charge higher premiums to provide capital or simply deny it. Private Equity and Venture Capital (PE/VC) organizations developed contemporaneously with the increase in the relevance of intangible assets in the economy. They form a specialized breed of financial intermediaries that are better prepared to deal with information asymmetry. This paper is the result of ten interviews with PE/VC organizations in Brazil. Its objective is to describe the selection process, criteria and indicators used by these organizations to identify and measure intangible assets, as well as the methods used to value prospective investments. Results show that PE/VC organizations rely on sophisticated methods to assess investment proposals, with specific criteria and indicators for the main classes of intangible assets. However, no value is assigned to these assets individually. The information gathered is used to understand the sources of cash flows and risks, which are then combined by discounted cash flow methods to estimate the firm's value. Given PE/VC organizations' extensive experience with innovative Small and Medium-sized Enterprises (SMEs), we believe that shedding light on how PE/VC organizations deal with intangible assets brings important insights to the intangible assets debate.
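
For reference, a hedged sketch of the generic discounted cash flow relation the abstract alludes to (standard textbook notation, not taken from the paper):

```latex
V_0 = \sum_{t=1}^{T} \frac{E[CF_t]}{(1+r)^t}
```

where the expected cash flows E[CF_t] reflect what the investor learned about the intangible-driven sources of cash flow, and the discount rate r prices the assessed risks.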

Relevance: 30.00%

Abstract:

The continuing development of new materials makes systems lighter and stronger, permitting more complex systems that provide greater functionality and flexibility and, in turn, demand more effective evaluation of their structural health. Smart material technology has become an area of increasing interest in this field. The combination of smart materials and artificial neural networks is an excellent tool for pattern recognition, making them well suited to monitoring and fault classification of equipment and structures. In order to identify the fault, the neural network must be trained using a set of solutions to the corresponding forward variational problem. After the training process, the net can successfully solve the inverse variational problem in the context of monitoring and fault detection because of its pattern recognition and interpolation capabilities. The structural frequency response function (FRF) is a fundamental element of structural dynamic analysis, and it can be extracted from measured electric impedance through the electromechanical interaction of a piezoceramic and a structure. In this paper we use the FRF obtained from a finite element model (FEM) to generate the training data for the neural networks, and the identification of damage can be done by measuring electric impedance, since suitable data normalization correlates FRF and electrical impedance.
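
For illustration, a hedged sketch of the pattern-recognition step: a neural network trained on normalized FRF magnitudes from a model, later fed impedance-derived features for damage classification. The data here are random stand-ins and every name is illustrative; this is not the paper's FEM pipeline.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n_freqs, n_samples = 64, 200
frf_features = rng.random((n_samples, n_freqs))   # stand-in for FEM FRFs
damage_class = rng.integers(0, 3, n_samples)      # 3 hypothetical fault classes

# Normalize each FRF so impedance-derived spectra share the same scale
frf_features /= np.linalg.norm(frf_features, axis=1, keepdims=True)

net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
net.fit(frf_features, damage_class)

# In deployment, measured electric impedance would be converted to the same
# normalized feature space before calling net.predict(...)
```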

Relevance: 30.00%

Abstract:

This work studies the generalization capability of neural networks using vibration-based measurement data, aiming at operating condition and health monitoring of mechanical systems. The procedure uses the backpropagation algorithm to classify the input patterns of a system with different stiffness ratios. A large set of input data containing various stiffness ratios was investigated, as well as a reduced set containing only the extreme ones, in order to study the generalization capability of the network. This allows the definition of neural networks that can be trained on a reduced set of data; once successfully trained, the network can identify intermediate failure conditions. Several damage conditions and intensities were studied using numerical data. The neural network demonstrated a good capacity for generalization in all cases. Finally, the proposal was tested with experimental data.
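
For illustration, a hedged sketch of the generalization test described: train only on extreme stiffness ratios, then check the predictions at intermediate, unseen ratios. The feature function and all values below are synthetic stand-ins, not the paper's vibration measurements.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

def vibration_feature(stiffness_ratio):
    """Toy stand-in for a vibration-based feature vector."""
    base = np.linspace(0, 1, 16)
    return np.sin(2 * np.pi * stiffness_ratio * base) \
        + 0.01 * rng.standard_normal(16)

extreme = [0.2, 1.0]                   # reduced training set: extremes only
intermediate = [0.4, 0.6, 0.8]         # unseen, intermediate conditions

X_train = np.array([vibration_feature(r) for r in extreme for _ in range(50)])
y_train = np.repeat(extreme, 50)

net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
net.fit(X_train, y_train)

X_test = np.array([vibration_feature(r) for r in intermediate])
print(net.predict(X_test))             # how well does the net interpolate?
```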

Relevance: 30.00%

Abstract:

A measurement of the top quark pair production cross section in proton-antiproton collisions at a centre-of-mass energy of √s = 1.96 TeV is presented. This analysis uses 405 ± 25 pb⁻¹ of data collected with the D0 detector at the Fermilab Tevatron Collider. Fully hadronic tt̄ decays with final states of six or more jets are separated from the multijet background using secondary vertex tagging and a neural network. The tt̄ cross section is measured as σ(tt̄) = 4.5 +2.0/−1.9 (stat) +1.4/−1.1 (syst) ± 0.3 (lumi) pb for a top quark mass of m_t = 175 GeV/c².
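
For context, a hedged sketch of the generic counting-experiment relation behind such a cross-section measurement (standard notation, not quoted from the paper):

```latex
\sigma_{t\bar{t}} = \frac{N_{\mathrm{obs}} - N_{\mathrm{bkg}}}{\varepsilon \int \mathcal{L}\, dt}
```

where N_obs is the number of selected events, N_bkg the estimated multijet background, ε the selection efficiency (here including vertex tagging and the neural network cut), and ∫L dt the integrated luminosity (405 pb⁻¹).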