882 results for multi-method study
Abstract:
This study puts forward a method to model and simulate the complex system of a hospital on the basis of multi-agent technology. Hospital agents with intelligent and coordinative characteristics were designed, a message object was defined, and an operating mechanism for autonomous activities and coordination was specified. In addition, an Ontology library and a Norm library were introduced using semiotic methods and theory to extend the system modelling approach. Swarm was used to develop the multi-agent based simulation system, which can guide a hospital in improving its organization and management, optimizing working procedures, improving the quality of medical care and reducing medical costs.
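The abstract describes the architecture only at a high level. As a loose illustration (in Python rather than the Swarm toolkit actually used, with all names hypothetical), agents exchanging message objects under a simple scheduler might look like:

```python
from dataclasses import dataclass, field
from collections import deque

@dataclass
class Message:
    """Hypothetical message object exchanged between hospital agents."""
    sender: str
    receiver: str
    content: str

@dataclass
class HospitalAgent:
    """Autonomous agent with a simple coordination mechanism: it reacts
    to incoming messages and can forward requests to other agents."""
    name: str
    inbox: deque = field(default_factory=deque)

    def send(self, other: "HospitalAgent", content: str) -> None:
        other.inbox.append(Message(self.name, other.name, content))

    def step(self) -> None:
        # one scheduler tick: process all pending messages
        while self.inbox:
            msg = self.inbox.popleft()
            print(f"{self.name} handles '{msg.content}' from {msg.sender}")

reception, lab = HospitalAgent("reception"), HospitalAgent("lab")
reception.send(lab, "order blood test for patient 42")
for agent in (reception, lab):
    agent.step()
```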
Abstract:
A changing environment is now the main challenge for most organizations, since they must evaluate appropriate policies to adapt to it. In this paper, we propose a multi-agent simulation method for evaluating policies, based on complex adaptive system theory. Furthermore, we propose a semiotic EDA (Epistemic, Deontic, Axiological) agent model that simulates agents' behavior in the system by incorporating the social norms reflecting the policy. A case study is provided to validate our approach. Our method offers better adaptability and validity than qualitative analysis and experimental approaches, and the semiotic agent model provides high credibility in simulating agents' behavior.
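As a loose sketch of how an EDA-style agent might filter candidate actions through epistemic beliefs, deontic norms and axiological values (all names hypothetical; the paper's actual semiotic formalism is richer than this):

```python
def eda_choose_action(candidates, beliefs, norms, values):
    """Pick the action that is feasible (epistemic), permitted by the
    social norms reflecting the policy (deontic), and scores highest
    against the agent's values (axiological)."""
    feasible = [a for a in candidates if beliefs.get(a, False)]      # epistemic
    permitted = [a for a in feasible if all(n(a) for n in norms)]    # deontic
    if not permitted:
        return None
    return max(permitted, key=lambda a: values.get(a, 0.0))          # axiological

norms = [lambda a: a != "dump_waste"]    # a policy norm forbidding one action
action = eda_choose_action(
    candidates=["expand", "dump_waste", "recycle"],
    beliefs={"expand": True, "dump_waste": True, "recycle": True},
    norms=norms,
    values={"expand": 0.4, "recycle": 0.7},
)
print(action)  # -> "recycle"
```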
Abstract:
Forests are a store of carbon and an ecosystem that continually removes carbon dioxide from the atmosphere. If they are sustainably managed, the carbon store can be maintained at a constant level, while the trees removed and converted to timber products form an additional long-term carbon store. The total carbon store in the forest and the associated ‘wood chain’ therefore increases over time, given appropriate management. This increasing carbon store can be further enhanced by afforestation. The UK’s forest area has increased continually since the early 1900s, although the rate of increase has declined since its peak in the late 1980s, and the picture is similar in the rest of Europe. The increased sustainable use of timber in construction is a key market incentive for afforestation, which can make a significant contribution to reducing carbon emissions. The case study presented in this paper demonstrates the carbon benefits of a Cross Laminated Timber (CLT) solution for a multi-storey residential building in comparison with a more conventional reinforced concrete solution. The embodied carbon of the building up to completion of construction is considered, together with the carbon stored during the life of the building and the impact of different end-of-life scenarios. The results show that the total stored carbon in the CLT structural frame is 1215 tCO2 (30 tCO2 per housing unit). The choice of end-of-life treatment has a significant effect on the whole-life embodied carbon of the CLT frame, which ranges from −1017 tCO2e for re-use to +153 tCO2e for incineration without energy recovery. All end-of-life scenarios considered result in lower total CO2e emissions for the CLT frame building than for the reinforced concrete solution.
Abstract:
This paper presents a software-based study of a hardware-oriented non-sorting median calculation method on a set of integers. The method divides the binary representation of each integer in the set into bit slices in order to find the element located in the middle position. The method exhibits linear time complexity, and our analysis shows that the best execution-time performance is obtained when 4-bit slices are used for 8-bit and 16-bit integers, for almost any data set size. Results suggest that a software implementation of the bit-slice median method outperforms sorting-based methods, with the improvement increasing with data set size. For data set sizes of N > 5, our simulations show an improvement of at least 40%.
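The abstract does not spell out the algorithm; the following is a minimal Python sketch of a non-sorting bit-slice median (a radix-select over fixed-width slices, here using the 4-bit width the study found fastest), assuming non-negative integers. Details of the hardware-oriented original may differ.

```python
def bitslice_median(values, bits=16, slice_bits=4):
    """Median of non-negative integers without sorting.

    Scans fixed-width bit slices from most to least significant,
    at each step keeping only the bucket of candidates that contains
    the k-th order statistic (the lower median)."""
    n = len(values)
    k = (n - 1) // 2                    # index of the (lower) median
    candidates = list(values)
    shift = bits
    while shift > 0 and len(candidates) > 1:
        shift -= slice_bits
        mask = (1 << slice_bits) - 1
        # histogram of the current slice over the remaining candidates
        counts = [0] * (1 << slice_bits)
        for v in candidates:
            counts[(v >> shift) & mask] += 1
        # locate the bucket holding the k-th element
        for d, c in enumerate(counts):
            if k < c:
                break
            k -= c
        candidates = [v for v in candidates if (v >> shift) & mask == d]
    return candidates[0]

print(bitslice_median([7, 3, 9, 1, 4], bits=8))  # -> 4
```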
Abstract:
The multicomponent nonideal-gas lattice Boltzmann model of Shan and Chen (S-C) is used to study immiscible displacement in a sinusoidal tube. The movement of the interface and of the contact point (a contact line in three dimensions) is studied. Due to the roughness of the boundary, the contact point exhibits "stick-slip" motion, and this effect weakens as the speed of the interface increases. For nonwetting fluids, the interface is almost perpendicular to the boundaries most of the time, although its shape differs considerably at different positions along the tube. Where the tube narrows, the interface becomes a complex curve rather than remaining a simple meniscus. The velocity is found to vary considerably between neighbouring nodes close to the contact point, consistent with the experimental observation that the velocity is multi-valued on the contact line. Finally, the effect of three boundary conditions is discussed. The average speed differs between boundary conditions, with the simple bounce-back rule making the contact point move fastest. Both the simple bounce-back and the no-slip bounce-back rules are more sensitive to boundary roughness than the half-way bounce-back rule. The simulation results suggest that the S-C model may be a promising tool for simulating the displacement of two immiscible fluids in complex geometries.
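As a rough illustration of the core of the S-C pseudo-potential model, the interparticle force on one component can be computed on a periodic D2Q9 lattice as follows. This is a minimal sketch with one common choice of ψ(ρ); the paper's boundary handling and full streaming/collision steps are not shown.

```python
import numpy as np

# D2Q9 lattice vectors and weights (index 0 is the rest vector)
E = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
W = np.array([4/9] + [1/9] * 4 + [1/36] * 4)

def shan_chen_force(rho_a, rho_b, G=1.2):
    """Force on component a due to component b:
    F(x) = -G * psi_a(x) * sum_i w_i * psi_b(x + e_i) * e_i."""
    psi_a = 1.0 - np.exp(-rho_a)        # a common psi(rho) choice
    psi_b = 1.0 - np.exp(-rho_b)
    force = np.zeros(rho_a.shape + (2,))
    for w, e in zip(W[1:], E[1:]):      # skip the rest vector
        # psi_b sampled at the neighbour x + e_i (periodic wrap)
        shifted = np.roll(np.roll(psi_b, -e[0], axis=0), -e[1], axis=1)
        force += w * shifted[..., None] * e
    return -G * psi_a[..., None] * force
```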
Abstract:
This study describes a simple technique that improves a recently developed 3D sub-diffraction imaging method based on three-photon absorption in commercially available quantum dots. The method combines imaging of biological samples via tri-exciton generation in quantum dots with deconvolution and spectral multiplexing, resulting in a novel approach for multi-color imaging of even thick biological samples at 1.4- to 1.9-fold better spatial resolution. The approach is realized on a conventional confocal microscope equipped with standard continuous-wave lasers. We demonstrate the potential of multi-color tri-exciton imaging of quantum dots combined with deconvolution on viral vesicles in lentivirally transduced cells, as well as on intermediate filaments in three-dimensional clusters of mouse-derived neural stem cells (neurospheres) and dense microtubule arrays in myotubes formed by stacks of differentiated C2C12 myoblasts.
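The deconvolution step is not specified in detail; a minimal Richardson-Lucy loop (a standard choice, not necessarily the authors' implementation) would look like this, assuming a normalized point spread function:

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, n_iter=30):
    """Minimal Richardson-Lucy deconvolution (illustrative only)."""
    estimate = np.full(image.shape, image.mean())
    psf_flip = psf[::-1, ::-1]
    for _ in range(n_iter):
        conv = fftconvolve(estimate, psf, mode="same")
        ratio = image / np.maximum(conv, 1e-12)   # avoid division by zero
        estimate *= fftconvolve(ratio, psf_flip, mode="same")
    return estimate

# toy demo: blur a point source with a Gaussian PSF, then deconvolve
x, y = np.mgrid[-7:8, -7:8]
psf = np.exp(-(x**2 + y**2) / 4.0); psf /= psf.sum()
truth = np.zeros((64, 64)); truth[32, 32] = 1.0
sharp = richardson_lucy(fftconvolve(truth, psf, mode="same"), psf)
```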
Abstract:
An efficient and robust method to measure vitamin D (25-hydroxyvitamin D3 (25(OH)D3) and 25-hydroxyvitamin D2) in dried blood spots (DBS) has been developed and applied in the pan-European, multi-centre, internet-based, personalised nutrition intervention study Food4Me. The method includes calibration with blood containing endogenous 25(OH)D3, spotted as DBS and corrected for haematocrit content, and was validated following international standards. The performance characteristics did not reach those of the current gold standard, liquid chromatography-MS/MS in plasma, for all parameters, but were found to be very suitable for status-level determination under field conditions. DBS sample quality was very high, and 3778 measurements of 25(OH)D3 were obtained from 1465 participants. The study centre and the season within the study centre were very good predictors of 25(OH)D3 levels (P < 0.001 in each case). Seasonal effects were modelled by fitting a sine function with a minimum 25(OH)D3 level on 20 January and a maximum on 21 July. The seasonal amplitude varied from centre to centre; the largest difference between winter and summer levels was found in Germany and the smallest in Poland. The model was cross-validated to determine the consistency of the predictions and the performance of the DBS method. The Pearson correlation between measured and predicted values was r = 0.65, and the standard deviation of their differences was 21.2 nmol/l; this includes both the analytical variation and the within-subject biological variation. Overall, DBS obtained by unsupervised sampling of the participants at home proved a viable methodology for obtaining vitamin D status information in a large nutritional study.
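The seasonal model described (a sine with its minimum on 20 January, day 20 of the year, and maximum on 21 July) can be sketched as follows; the per-centre mean level and amplitude are the fitted quantities, and all function names are illustrative rather than from the paper:

```python
import numpy as np

def seasonal_model(day_of_year, mean_level, amplitude):
    """25(OH)D3 level as a sine with minimum on day 20 (20 Jan)
    and maximum half a year later (21 Jul)."""
    return mean_level + amplitude * np.sin(
        2 * np.pi * (day_of_year - 20) / 365.25 - np.pi / 2)

def fit_seasonal(days, levels):
    """Least-squares fit of mean level and amplitude for one centre;
    the phase is fixed by the 20 Jan / 21 Jul extremes, so the model
    is linear in the two unknowns."""
    s = np.sin(2 * np.pi * (np.asarray(days) - 20) / 365.25 - np.pi / 2)
    design = np.column_stack([np.ones_like(s), s])
    (mean_level, amplitude), *_ = np.linalg.lstsq(
        design, np.asarray(levels, float), rcond=None)
    return mean_level, amplitude

# e.g. one centre's days of year and measured 25(OH)D3 levels (nmol/l)
m, a = fit_seasonal([15, 100, 200, 300], [38.0, 55.0, 70.0, 45.0])
```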
Abstract:
A statistical-dynamical downscaling method is used to estimate future changes in the wind energy output (Eout) of a benchmark wind turbine across Europe at the regional scale. To this end, 22 global climate models (GCMs) from the Coupled Model Intercomparison Project Phase 5 (CMIP5) ensemble are considered. The downscaling method uses circulation weather types and regional climate modelling with the COSMO-CLM model. Future projections are computed for two time periods (2021–2060 and 2061–2100) under two scenarios (RCP4.5 and RCP8.5). The CMIP5 ensemble mean response reveals a more-likely-than-not increase of mean annual Eout over Northern and Central Europe and a likely decrease over Southern Europe, with some uncertainty in both the magnitude and the sign of the changes. Higher robustness is observed for specific seasons: except for the Mediterranean area, an ensemble mean increase of Eout is simulated for winter and a decrease for summer, resulting in a strong increase of the intra-annual variability for most of Europe. The latter is particularly probable during the second half of the 21st century under the RCP8.5 scenario. In general, signals are stronger for 2061–2100 than for 2021–2060, and for RCP8.5 than for RCP4.5. Regarding changes in the inter-annual variability of Eout for Central Europe, the future projections vary strongly between individual models, and also between future periods and scenarios within single models. This study showed, for an ensemble of 22 CMIP5 models, that changes in wind energy potential over Europe may take place in the coming decades. However, given the uncertainties identified here, further investigations with multi-model ensembles are needed to better quantify and understand the future changes.
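Converting downscaled wind speeds to the energy output of a benchmark turbine typically goes through a power curve. A minimal sketch under assumed, illustrative turbine parameters (not those of the study's benchmark turbine):

```python
def turbine_power_kw(v, cut_in=3.5, rated_speed=13.0, cut_out=25.0,
                     rated_kw=2500.0):
    """Idealized power curve: zero outside the operating range, a cubic
    ramp between cut-in and rated speed, constant at rated power above."""
    if v < cut_in or v >= cut_out:
        return 0.0
    if v >= rated_speed:
        return rated_kw
    return rated_kw * (v**3 - cut_in**3) / (rated_speed**3 - cut_in**3)

# Eout over a series of hourly wind speeds (m/s) -> kWh
speeds = [4.2, 7.8, 12.5, 14.0, 26.0]
eout_kwh = sum(turbine_power_kw(v) for v in speeds)  # 1 h per sample
```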
Abstract:
What is already known on the subject? Multi-sensory treatment approaches have been shown to have a positive impact on outcome measures, such as the accuracy of speech movement patterns and speech intelligibility, in adults with motor speech disorders as well as in children with apraxia of speech, autism and cerebral palsy. However, there has been no empirical study of multi-sensory treatment for children with speech sound disorders (SSDs) who demonstrate motor control issues in the jaw and orofacial structures (e.g. jaw sliding, jaw overextension, inadequate lip rounding/retraction and decreased integration of speech movements). What this paper adds? Findings from this study indicate that, for speech production disorders in which both the planning and the production of spatiotemporal parameters of movement sequences for speech are disrupted, multi-sensory treatment programmes that integrate auditory, visual and tactile–kinesthetic information improve the auditory and visual accuracy of speech production. Both the training words (practised in treatment) and the test words (not practised in treatment) showed positive change in most participants, indicating generalization of target features to untrained words. It is inferred that treatment focusing on integrating multi-sensory information and normalizing parameters of speech movements is an effective method for treating children with SSDs who demonstrate speech motor control issues.
Abstract:
Scope: The use of biomarkers in the objective assessment of dietary intake is a high priority in nutrition research. The aim of this study was to examine pentadecanoic acid (C15:0) and heptadecanoic acid (C17:0) as biomarkers of dairy food intake. Methods and results: The data used in the present study were obtained as part of the Food4Me Study. Estimates of C15:0 and C17:0 from dried blood spots and intakes of dairy from an FFQ were obtained from participants (n = 1180) across seven countries. Regression analyses were used to explore associations of the biomarkers with dairy intake levels, and receiver operating characteristic (ROC) analyses were used to evaluate the fatty acids. Significant positive associations were found between C15:0 and total intake of high-fat dairy products. C15:0 showed good ability to distinguish between low and high consumers of high-fat dairy products. Conclusion: C15:0 can be used as a biomarker of high-fat dairy intake and of specific high-fat dairy products. Both C15:0 and C17:0 performed poorly for total dairy intake, highlighting the need for caution when using them in epidemiological studies.
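The ROC evaluation of a biomarker's ability to separate low from high consumers can be sketched as follows, on hypothetical data (the variable names and effect sizes are invented, not from the study):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
# hypothetical data: C15:0 concentrations for low (0) vs high (1) consumers
high_fat_dairy_consumer = rng.integers(0, 2, size=200)
c15_0 = 0.2 + 0.05 * high_fat_dairy_consumer + rng.normal(0, 0.04, 200)

auc = roc_auc_score(high_fat_dairy_consumer, c15_0)
print(f"AUC for C15:0 as a high-fat dairy marker: {auc:.2f}")
```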
Abstract:
Aim To evaluate ex vivo the accuracy of the iPex multi-frequency electronic apex locator (NSK Ltd, Tokyo, Japan) for working length determination in primary molar teeth. Methodology One calibrated examiner determined the working length in 20 primary molar teeth (33 root canals in total). Working length was measured both visually, with the placement of a K-file 1 mm short of the apical foramen or the most coronal limit of root resorption, and electronically using the iPex apex locator, according to the manufacturer's instructions. Data were analysed statistically using the intraclass correlation coefficient (ICC). Results Comparison of the actual and electronic measurements revealed high correlation (ICC = 0.99) between the methods, regardless of the presence or absence of physiological root resorption. Conclusions In this laboratory study, the iPex accurately identified the apical foramen or apical opening location for working length measurement in primary molar teeth.
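An ICC agreement analysis of this kind can be sketched with the pingouin package; the paired measurements below are invented for illustration only:

```python
import pandas as pd
import pingouin as pg

# hypothetical paired working-length measurements (mm) per root canal
df = pd.DataFrame({
    "canal":  list(range(6)) * 2,
    "method": ["visual"] * 6 + ["electronic"] * 6,
    "length": [17.5, 16.0, 18.5, 15.5, 17.0, 16.5,
               17.5, 16.0, 18.0, 15.5, 17.0, 17.0],
})
icc = pg.intraclass_corr(data=df, targets="canal",
                         raters="method", ratings="length")
print(icc[["Type", "ICC"]])
```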
Abstract:
The aim of this preliminary work was to present a novel method, suitable for investigating glass cooling from melt to solid state, based on a fast, simple and unconventional microwave technique. The glass system xBaO·(100−x)B₂O₃ (x = 0% and 40%) was selected as an example for this study. The melt was poured inside a piece of waveguide and its cooling was then monitored through the microwave signal as a function of time. Variations in the signal can provide valuable information about structural changes that take place during the cooling stages, such as relaxation processes. This method can be useful for investigating the cooling and heating of other materials, opening new possibilities for the investigation of the dielectric behavior of materials at high temperatures.
Abstract:
We have developed a spectrum synthesis method for modeling the ultraviolet (UV) emission from the accretion disks of cataclysmic variables (CVs). The disk is separated into concentric rings, with an internal structure taken from the Wade & Hubeny disk-atmosphere models. For each ring, a wind atmosphere is calculated in the comoving frame with a vertical velocity structure obtained from a solution of the Euler equation. Using simple assumptions regarding rotation and the wind streamlines, these one-dimensional models are combined into a single 2.5-dimensional model for which we compute synthetic spectra. We find that the resulting line and continuum behavior as a function of orbital inclination is consistent with observations, and verify that the accretion rate affects the wind temperature, leading to corresponding trends in the intensity of UV lines. In general, we also find that the primary mass has a strong effect on the P Cygni absorption profiles, that the synthetic emission-line profiles are strongly sensitive to the wind temperature structure, and that an increase in the mass-loss rate enhances the resonance-line intensities. Synthetic spectra were compared with UV data for two high-inclination nova-like CVs, RW Tri and V347 Pup. We needed to include disk regions with arbitrarily enhanced mass loss to reproduce the line widths and profiles reasonably well. This fact, together with a lack of flux in some high-ionization lines, may signal the presence of density-enhanced regions in the wind or, alternatively, may result from inadequacies in some of our simplifying assumptions.
Abstract:
Evidence of jet precession in many galactic and extragalactic sources has been reported in the literature. Much of this evidence is based on studies of the kinematics of the jet knots, which depend on the correct identification of the components to determine their respective proper motions and position angles on the plane of the sky. Identification problems related to fitting procedures, as well as observations poorly sampled in time, may hamper the tracking of the components over time and consequently lead to a misinterpretation of the data. To deal with these limitations, we introduce a powerful statistical tool for analysing jet precession: the cross-entropy method for continuous multi-extremal optimization. Based only on the raw data of the jet components (right ascension and declination offsets from the core), the cross-entropy method searches for the precession-model parameters that best represent the data. In this work we present a large number of tests to validate the technique, using synthetic precessing jets built from a given set of precession parameters. With the aim of recovering these parameters, we applied the cross-entropy method to our precession model, exhaustively varying the quantities associated with the method. Our results show that, even in the most challenging tests, the cross-entropy method was able to find the correct parameters to within the 1 per cent level. Even for a non-precessing jet, our optimization method successfully indicated the absence of precession.
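The cross-entropy method itself is easy to sketch. The following minimal Python version shows the sample/elite/refit loop with a generic loss function; in the paper's setting, the objective would be the residuals between the precession model and the observed RA/Dec offsets:

```python
import numpy as np

def cross_entropy_minimize(loss, dim, n_samples=200, n_elite=20,
                           mean=None, std=None, iters=100):
    """Cross-entropy method for continuous multi-extremal optimization:
    sample candidates from a Gaussian, keep the elite set with the
    lowest loss, refit the Gaussian, and repeat until it collapses."""
    rng = np.random.default_rng(0)
    mean = np.zeros(dim) if mean is None else np.asarray(mean, float)
    std = np.ones(dim) if std is None else np.asarray(std, float)
    for _ in range(iters):
        samples = rng.normal(mean, std, size=(n_samples, dim))
        losses = np.array([loss(s) for s in samples])
        elite = samples[np.argsort(losses)[:n_elite]]
        mean, std = elite.mean(axis=0), elite.std(axis=0) + 1e-12
        if std.max() < 1e-8:            # distribution has collapsed
            break
    return mean

# toy usage: recover the minimum of a shifted quadratic
best = cross_entropy_minimize(lambda p: ((p - 3.0) ** 2).sum(), dim=2)
print(best)  # -> approximately [3.0, 3.0]
```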
Abstract:
Nuclear (p,α) reactions destroying the so-called "light elements" lithium, beryllium and boron have been studied extensively in the past, mainly because of their role in understanding certain astrophysical phenomena, e.g. mixing phenomena occurring in young F-G stars [1]. Such mechanisms transport surface material down to the region close to the nuclear destruction zone, where typical temperatures of the order of ~10^6 K are reached. The corresponding Gamow energy, E_0 = 1.22 (Z_x^2 Z_X^2 T_6^2)^(1/3) keV [2], is about ~10 keV if one considers the "boron case" and sets Z_x = 1, Z_X = 5 and T_6 = 5 in the formula. Direct measurements of the ¹¹B(p,α₀)⁸Be and ¹⁰B(p,α)⁷Be reactions in this energy region are difficult to perform, mainly because of the combined effects of Coulomb-barrier penetrability and electron screening [3]. The indirect Trojan Horse Method (THM) [4-6] allows one to extract the two-body reaction cross section of interest for astrophysics without extrapolation procedures. Owing to the THM formalism, the extracted indirect data have to be normalized to the available direct data at higher energies, which makes the method a complementary tool for resolving open questions in both nuclear physics and astrophysics [7-12].
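As a quick sanity check of the quoted value, plugging Z_x = 1, Z_X = 5 and T_6 = 5 into the formula as given:

```python
# Gamow energy in keV for the "boron case"
E0 = 1.22 * (1**2 * 5**2 * 5**2) ** (1 / 3)
print(f"E0 = {E0:.1f} keV")  # -> E0 = 10.4 keV, i.e. ~10 keV as stated
```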