920 results for method of images
Abstract:
The portfolio generating the iTraxx EUR index is modeled by coupled Markov chains. Each industry in the portfolio evolves according to its own Markov transition matrix. Using a variant of the method of moments, the model parameters are estimated from a Standard & Poor's data set. Swap spreads are evaluated by Monte Carlo simulations. Along with an actuarially fair spread, a least-squares spread is considered.
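For orientation only, a minimal Monte Carlo sketch of the coupled-Markov-chain idea under our own illustrative assumptions (two hypothetical industries, synthetic quarterly transition matrices, equal notional weights, flat discount rate; none of these come from the paper): each industry's names migrate through rating states via that industry's own transition matrix, and the actuarially fair spread is estimated as the ratio of the expected discounted protection leg to the expected discounted premium annuity.

```python
import numpy as np

rng = np.random.default_rng(42)
# Hypothetical quarterly matrices over states 0 = investment grade, 1 = speculative, 2 = default.
P_IND = {
    "financials":  np.array([[0.975, 0.020, 0.005],
                             [0.050, 0.930, 0.020],
                             [0.000, 0.000, 1.000]]),
    "industrials": np.array([[0.960, 0.030, 0.010],
                             [0.040, 0.930, 0.030],
                             [0.000, 0.000, 1.000]]),
}
N_PER_IND, QUARTERS = 60, 20        # names per industry, 5-year horizon
RECOVERY, RATE = 0.40, 0.03         # assumed recovery rate and flat discount rate
W = 1.0 / (N_PER_IND * len(P_IND))  # equal notional weight per name

def fair_spread(n_paths: int = 200) -> float:
    """Actuarially fair running spread: E[PV protection] / E[PV risky annuity]."""
    disc = np.exp(-RATE * 0.25 * np.arange(1, QUARTERS + 1))
    prot_pv = annuity_pv = 0.0
    for _ in range(n_paths):                          # small path count, for illustration
        states = {k: np.zeros(N_PER_IND, dtype=int) for k in P_IND}
        for q in range(QUARTERS):
            new_defaults = 0
            for ind, P in P_IND.items():
                s = states[ind]
                alive = np.where(s != 2)[0]
                for i in alive:                       # one Markov step per surviving name
                    s[i] = rng.choice(3, p=P[s[i]])
                new_defaults += int(np.sum(s[alive] == 2))
            surviving = sum(int(np.sum(states[k] != 2)) for k in P_IND)
            prot_pv += new_defaults * W * (1 - RECOVERY) * disc[q]
            annuity_pv += surviving * W * 0.25 * disc[q]
    return prot_pv / annuity_pv

print(f"estimated fair spread ~ {1e4 * fair_spread():.0f} bp")
```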
Abstract:
OBJECTIVE: To test discriminant analysis as a method of turning the information of a routine customer satisfaction survey (CSS) into a more accurate decision-making tool. METHODS: A self-administered questionnaire of 7 questions, each with 10 multiple-choice options, was used to study a sample of patients seen in two outpatient care units in Valparaíso, Chile, one of primary care (n=100) and the other of secondary care (n=249). Two cut-off points were considered in the dependent variable (final satisfaction score): satisfied versus unsatisfied, and very satisfied versus all others. Results were compared with empirical measures (proportion of satisfied individuals, proportion of unsatisfied individuals and size of the median). RESULTS: The response rate was very high, over 97.0% in both units. A new variable, medical attention, was revealed as explaining satisfaction at the primary care unit. The proportion of the total variability explained by the model was very high (over 99.4%) in both units when comparing satisfied with unsatisfied customers. In the analysis of very satisfied versus all other customers, a significant relationship was identified only in the case of the primary care unit, and it explained a small proportion of the variability (41.9%). CONCLUSIONS: Discriminant analysis identified relationships not revealed by the previous analysis. It provided information about the proportion of the variability explained by the model. It identified non-significant relationships suggested by the empirical analysis (e.g. very satisfied versus all others in the secondary care unit). It measured the contribution of each independent variable to the explanation of the variation of the dependent one.
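A small illustrative sketch of the kind of analysis described (entirely synthetic data, not the survey's): a linear discriminant analysis separating "satisfied" from "unsatisfied" respondents based on multiple-choice item scores, reporting the share of correctly classified cases and the per-item discriminant coefficients.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)
n, n_items = 249, 7                                  # e.g. one unit's sample size, 7 questions
X = rng.integers(1, 11, size=(n, n_items)).astype(float)   # 10-option item scores
# dependent variable dichotomised at an assumed cut-off on the overall score
y = (X.mean(axis=1) + rng.normal(0, 0.5, n) > 6).astype(int)

lda = LinearDiscriminantAnalysis().fit(X, y)
print("classification accuracy:", round(lda.score(X, y), 3))
print("discriminant coefficients per item:", lda.coef_.round(2))
```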
Abstract:
In this paper we make an exhaustive study of the fourth-order linear operator u^(4) + Mu coupled with the clamped beam conditions u(0) = u(1) = u'(0) = u'(1) = 0. We obtain the exact values of the real parameter M for which this operator satisfies an anti-maximum principle. Such a property is equivalent to the related Green's function being nonnegative on [0, 1] x [0, 1]. When M < 0 we obtain the best estimate by means of spectral theory, and for M > 0 we attain the optimal value by studying the oscillation properties of the solutions of the homogeneous equation u^(4) + Mu = 0. By using the method of lower and upper solutions we deduce the existence of solutions for nonlinear problems coupled with these boundary conditions.
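For readability, the boundary value problem and the equivalence stated in the abstract can be written out as follows; the symbols G_M (the Green's function of the problem) and sigma (the right-hand side) are our notation, not necessarily the paper's.

```latex
\begin{aligned}
  & u^{(4)}(t) + M\,u(t) = \sigma(t), \qquad t \in [0,1],\\
  & u(0) = u(1) = u'(0) = u'(1) = 0,\\
  & \text{anti-maximum principle for } u^{(4)} + Mu
      \;\Longleftrightarrow\; G_M(t,s) \ge 0 \ \text{on } [0,1]\times[0,1].
\end{aligned}
```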
Abstract:
This paper presents a methodology for distribution network reconfiguration in the presence of outages, in order to choose the reconfiguration with the lowest power losses. The methodology is based on statistical failure and repair data of the distribution power system components and uses fuzzy-probabilistic modelling of the component outage parameters. Fuzzy membership functions of the component outage parameters are obtained from statistical records. A hybrid method of fuzzy sets and Monte Carlo simulation based on the fuzzy-probabilistic models allows capturing both the randomness and the fuzziness of the component outage parameters. Once the system states are obtained by Monte Carlo simulation, a logic programming algorithm is applied to generate all possible reconfigurations for every system state. In order to evaluate line flows and bus voltages and to identify any overloading and/or voltage violations, a distribution power flow is applied to select the feasible reconfiguration with the lowest power losses. To illustrate the application of the proposed methodology to a practical case, the paper includes a case study that considers a real distribution network.
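A minimal sketch of the hybrid fuzzy/Monte Carlo sampling idea under our own simplifying assumptions (triangular fuzzy numbers for failure rate and repair time, two-state components, made-up parameter values; this is not the paper's model): each trial picks crisp rates from the fuzzy numbers, converts them to an unavailability, and samples every component's up/down state.

```python
import random

random.seed(0)
# (low, mode, high) of hypothetical fuzzy failure rate [1/yr] and repair time [h]
COMPONENTS = {
    "feeder_1": ((0.05, 0.10, 0.20), (2.0, 4.0, 8.0)),
    "feeder_2": ((0.08, 0.15, 0.25), (3.0, 5.0, 10.0)),
    "switch_a": ((0.01, 0.02, 0.05), (1.0, 2.0, 4.0)),
}

def sample_state(fuzzy_lambda, fuzzy_r):
    lo, mode, hi = fuzzy_lambda
    lam = random.triangular(lo, hi, mode)        # crisp failure rate from the fuzzy number
    lo, mode, hi = fuzzy_r
    r = random.triangular(lo, hi, mode)          # crisp repair time (hours)
    unavailability = lam * r / 8760.0            # forced outage probability
    return "down" if random.random() < unavailability else "up"

def monte_carlo_states(n_trials=10_000):
    """Yield one sampled system state (component -> 'up'/'down') per trial."""
    for _ in range(n_trials):
        yield {c: sample_state(lam, r) for c, (lam, r) in COMPONENTS.items()}

# Each sampled state would then go through the reconfiguration enumeration and the
# distribution power flow described above to keep the feasible, lowest-loss topology.
outage_states = sum(1 for st in monte_carlo_states() if "down" in st.values())
print("sampled states containing at least one outage:", outage_states)
```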
Abstract:
It is difficult to get the decision about an opinion after many users get the meeting in same place. It used to spend too much time in order to find solve some problem because of the various opinions of each other. TAmI (Group Decision Making Toolkit) is the System to Group Decision in Ambient Intelligence [1]. This program was composed with IGATA [2], WebMeeting and the related Database system. But, because it is sent without any encryption in IP / Password, it can be opened to attacker. They can use the IP / Password to the bad purpose. As the result, although they make the wrong result, the joined member can’t know them. Therefore, in this paper, we studied the applying method of user’s authentication into TAmI.
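Purely as a generic illustration of the plaintext-credential problem (this is not the authentication scheme proposed in the paper): a simple challenge-response exchange built on Python's hashlib/hmac, in which the password itself is never transmitted in clear text.

```python
import hashlib, hmac, os

def derive_key(password: str, salt: bytes) -> bytes:
    # PBKDF2 turns the shared password into a key; both ends store or derive this.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

def client_response(key: bytes, challenge: bytes) -> bytes:
    # The client answers the server's nonce with an HMAC instead of the password.
    return hmac.new(key, challenge, hashlib.sha256).digest()

# --- toy round trip (salt, password and nonce are made up) ---
salt = b"per-user-salt"
key = derive_key("user-password", salt)
challenge = os.urandom(16)                      # server-issued nonce
expected = hmac.new(key, challenge, hashlib.sha256).digest()
assert hmac.compare_digest(client_response(key, challenge), expected)
print("authenticated without sending the password in clear")
```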
Abstract:
The main idea of the article is to consider the interdependence between Politics of Memory (as a type of narrating the Past) and Stereotyping. The author suggests that, in a time of information revolution, we are still constructing images of others on the basis of simplification, overestimation of association between features, and illusory correlations, instead of basing them on knowledge and personal contact. The Politics of Memory, national remembrance, and the historical consciousness play a significant role in these processes, because – as the author argues – they transform historically based 'symbolic analogies' into 'illusory correlations' between national identity and the behavior of its members. To support his theoretical investigation, the author presents results of his draft experiment and two case studies: (a) a social construction of images of neighbors based on Polish narrations about the Past; and (b) various processes of stereotyping based on the Remembrance of the Holocaust. All these considerations lead him to state that the Politics of Memory should be recognized as an influential source of commonly shared stereotypes on other cultures and nations.
Abstract:
In this work we derive an analytical solution, given by Bessel series, to the transient one-dimensional (1D) bioheat transfer equation in a multi-layer region with spatially dependent heat sources. Each region represents an independent biological tissue characterized by temperature-invariant physiological parameters and a linearly temperature-dependent metabolic heat generation. Moreover, 1D Cartesian, cylindrical or spherical coordinates are used to define the geometry, and temperature boundary conditions of the first, second and third kinds are assumed at the inner and outer surfaces. We present two examples of clinical applications for the developed solution. In the first one, we investigate two different heat source terms to simulate the heating in a tumor and its surrounding tissue induced during a magnetic fluid hyperthermia technique used for cancer treatment. To obtain an accurate analytical solution, we determine the error associated with the truncated Bessel series that defines the transient solution. In the second application, we explore the potential of this model to study the effect of different environmental conditions in a multi-layered human head model (brain, bone and scalp). The convective heat transfer effect of a large blood vessel located inside the brain is also investigated. The results are further compared with a numerical solution obtained by the Finite Element Method and computed with COMSOL Multiphysics v4.1.
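For context, the standard Pennes-type bioheat equation that such multi-layer 1D models start from can be written as below; the symbols are ours, not necessarily the paper's. The index i labels the tissue layer, p = 0, 1, 2 selects Cartesian, cylindrical or spherical coordinates, Q_s is the spatially dependent source, and the metabolic term is linear in temperature as stated in the abstract.

```latex
\rho_i c_i \frac{\partial T_i}{\partial t}
  = \frac{k_i}{r^{p}} \frac{\partial}{\partial r}\!\left( r^{p} \frac{\partial T_i}{\partial r} \right)
  + \omega_{b,i}\, \rho_b c_b \bigl(T_a - T_i\bigr)
  + Q_{m,i}(T_i) + Q_{s,i}(r),
\qquad Q_{m,i}(T_i) = a_i + b_i T_i .
```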
Abstract:
Introduction: The quantification of the differential renal function in adults can be difficult due to many factors, one of these being the variation in kidney depth and the attenuation caused by all the tissues between the kidney and the camera. Some authors state that the lower attenuation in pediatric patients makes the use of attenuation correction algorithms unnecessary. This study compares the values of differential renal function obtained with and without attenuation correction techniques. Material and Methods: Images from a group of 15 individuals (aged 3 years +/- 2) were used and two attenuation correction methods were applied: Tonnesen correction factors and the geometric mean method. The mean acquisition time (post 99mTc-DMSA administration) was 3.5 hours +/- 0.8 h. Results: The absence of any attenuation correction method apparently leads to consistent values that correlate well with the ones obtained with attenuation correction. The differences found between the values obtained with and without attenuation correction were not significant. Conclusion: The decision not to apply any attenuation correction method can apparently be justified by the minor differences verified in the relative kidney uptake values. Nevertheless, if a truly accurate value of the relative kidney uptake is required, then an attenuation correction method should be used.
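A minimal sketch of the geometric mean method named above (illustrative ROI counts, not the study's data): conjugate anterior/posterior counts are combined as sqrt(A x P) per kidney before computing the differential (relative) function. The Tonnesen approach would instead estimate kidney depth from patient height and weight and scale the posterior counts by an attenuation factor; its regression coefficients are not reproduced here.

```python
import math

def geometric_mean_counts(anterior: float, posterior: float) -> float:
    """Depth-independent count estimate from conjugate views."""
    return math.sqrt(anterior * posterior)

def differential_function(left_a, left_p, right_a, right_p):
    left = geometric_mean_counts(left_a, left_p)
    right = geometric_mean_counts(right_a, right_p)
    total = left + right
    return 100 * left / total, 100 * right / total

# hypothetical background-corrected ROI counts
left_pct, right_pct = differential_function(41_500, 36_200, 44_800, 40_100)
print(f"left kidney {left_pct:.1f}% / right kidney {right_pct:.1f}%")
```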
Abstract:
Master's degree in Electrical and Computer Engineering
Abstract:
This article is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License. The Attribution-NonCommercial (CC BY-NC) license lets others remix, tweak, and build upon the work non-commercially; the new works must also acknowledge the original and be non-commercial.
Abstract:
The mechanisms of speech production are complex and have been attracting attention from researchers in both the medical and computer vision fields. Within the speech production mechanism, the study of the articulators is a complex issue, since they have a high degree of freedom during this process, namely the tongue, which makes its control and observation difficult. In this work the tongue shape during the articulation of the oral vowels of European Portuguese is automatically characterized by using statistical modeling on MR images. A point distribution model is built from a set of images collected during artificially sustained articulations of European Portuguese sounds, which can extract the main characteristics of the motion of the tongue. The model built in this work allows a clearer understanding of the dynamic speech events involved in sustained articulations. The tongue shape model can also be useful for speech rehabilitation purposes, specifically to recognize the compensatory movements of the articulators during speech production.
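A sketch of the point distribution model idea (synthetic landmark data, not the MR images of the study): tongue contours annotated with corresponding points are stacked as shape vectors, and PCA yields a mean shape plus the main modes of variation describing tongue motion.

```python
import numpy as np

rng = np.random.default_rng(7)
n_shapes, n_points = 30, 25                         # e.g. 30 sustained vowels, 25 landmarks
base = np.column_stack([np.linspace(0, 1, n_points),
                        0.3 * np.sin(np.pi * np.linspace(0, 1, n_points))])
shapes = base[None] + 0.02 * rng.normal(size=(n_shapes, n_points, 2))
X = shapes.reshape(n_shapes, -1)                    # each row: (x1, y1, ..., xN, yN)

mean_shape = X.mean(axis=0)
U, S, Vt = np.linalg.svd(X - mean_shape, full_matrices=False)
eigvals = S**2 / (n_shapes - 1)
explained = np.cumsum(eigvals) / eigvals.sum()
n_modes = int(np.searchsorted(explained, 0.95)) + 1  # modes covering 95% of the variance

# any plausible shape: x = mean + P b, with |b_k| <= 3*sqrt(eigval_k)
P = Vt[:n_modes].T
b = rng.uniform(-1, 1, n_modes) * 3 * np.sqrt(eigvals[:n_modes])
new_shape = (mean_shape + P @ b).reshape(n_points, 2)
print("modes kept:", n_modes, "- generated shape array:", new_shape.shape)
```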
Abstract:
Purpose - To compare the image quality and effective dose when applying the 10 kVp rule in manual mode acquisition and in AEC mode in PA chest X-ray. Method - 68 images (with and without lesions) were acquired with an anthropomorphic chest phantom on a Wolverson Arcoma X-ray unit. These images were compared against a reference image using the two-alternative forced choice (2AFC) method. The effective dose (E) was calculated with PCXMC software, using the exposure parameters and the DAP. The exposure index (lgM, provided by Agfa systems) was recorded. Results - Exposure time decreases more when applying the 10 kVp rule in manual mode (50%–28%) than in automatic mode (36%–23%). Statistical differences in E between several ionization chamber combinations in AEC mode were found (p = 0.002). E is lower when using only the right AEC ionization chamber. Considering image quality, there are no statistical differences (p = 0.348) between the different ionization chamber combinations in AEC mode for images with no lesions. The lgM values were higher when the AEC mode was used compared to the manual mode, and the lgM values obtained with AEC mode increased as the kVp value went up. The image quality scores did not show statistically significant differences (p = 0.343) for the images with lesions when comparing manual with AEC mode. Conclusion - In general, E is lower when manual mode is used. By using the right AEC ionization chamber under the lung, E will be the lowest in comparison to the other ionization chambers. The use of the 10 kVp rule did not affect the visibility of the lesions or the image quality.
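A small illustration of the 10 kVp rule as it is commonly stated (increase the tube voltage by 10 kVp and halve the mAs to keep detector exposure roughly constant); the technique factors below are hypothetical, not the phantom study's exposure parameters.

```python
def apply_10kvp_rule(kvp: float, mas: float) -> tuple[float, float]:
    """Return the adjusted (kVp, mAs) pair after one application of the rule."""
    return kvp + 10, mas / 2

kvp, mas, ma = 110, 2.0, 400          # assumed PA chest technique at a fixed tube current
new_kvp, new_mas = apply_10kvp_rule(kvp, mas)
print(f"{kvp} kVp @ {mas} mAs  ->  {new_kvp} kVp @ {new_mas} mAs")
# at fixed mA the halved mAs translates into a shorter exposure time
print(f"exposure time: {1000 * mas / ma:.1f} ms -> {1000 * new_mas / ma:.1f} ms")
```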
Abstract:
A flow injection analysis (FIA) system comprising a cysteine-selective electrode as the detection system was developed for the determination of this amino acid in pharmaceuticals. Several electrodes were constructed for this purpose, having PVC membranes with different ion exchangers and mediator solvents. Better working characteristics were attained with membranes comprising o-nitrophenyl octyl ether as mediator solvent and a tetraphenylborate-based ion sensor. Injection of 500 µL standard solutions into an ionic strength adjuster carrier of barium chloride (3x10-3 M) flowing at 2.4 mL min-1 showed linearity from 5.0x10-5 to 5.0x10-3 M, with slopes of 76.4 ± 0.6 mV decade-1 and R2 > 0.9935. The slope decreased significantly when a pH adjustment was required, selected at 4.5. Interference from several compounds (sodium, potassium, magnesium, barium, glucose, fructose, and sucrose) was estimated by potentiometric selectivity coefficients and considered negligible. Analyses of real samples were performed and considered accurate, with a relative error of +2.7% against an independent method.
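A sketch of how the electrode's working characteristics are typically obtained (synthetic potentials; only the slope value and linear range come from the abstract, the standard potential E0 is an assumption): fit E = E0 + S*log10(C) over the linear range and read the slope in mV per decade.

```python
import numpy as np

conc = np.array([5.0e-5, 1.0e-4, 5.0e-4, 1.0e-3, 5.0e-3])   # M, within 5.0e-5..5.0e-3
true_slope, true_e0 = 76.4, 420.0                            # mV/decade (abstract), assumed E0 in mV
rng = np.random.default_rng(3)
E = true_e0 + true_slope * np.log10(conc) + rng.normal(0, 0.5, conc.size)

slope, e0 = np.polyfit(np.log10(conc), E, 1)                 # linear fit of E vs log10(C)
r2 = np.corrcoef(np.log10(conc), E)[0, 1] ** 2
print(f"slope = {slope:.1f} mV/decade, E0 = {e0:.1f} mV, R^2 = {r2:.4f}")
```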
Abstract:
Ascorbic acid is found in many food samples. Its clinical and technological importance demands an easy-to-use, rapid, robust and inexpensive method of analysis. For this purpose, this work proposes a new flow procedure based on the oxidation of ascorbic acid by periodate. A new potentiometric periodate sensor was constructed to monitor this reaction. The selective membranes were of PVC with porphyrin-based sensing systems and a lipophilic cation as additive. The sensor displayed a near-Nernstian response for periodate over 1.0x10-2 to 6.0x10-6 M, with an anionic slope of 73.9 ± 0.9 mV decade-1. It was pH-independent in acidic media and presented good selectivity towards several inorganic anions. The flow set-up operated in a double-channel configuration, carrying a 5.0x10-4 M IO4- solution and a suitable buffer; these were mixed in a 50-cm reaction coil. The overall flow rate was 7 ml min-1 and the injection volume 70 µl. Under these conditions, a linear behaviour against concentration was observed over 17.7-194.0 µg ml-1, with a slope of 0.169 mV (mg/l)-1, a reproducibility of ±1.1 mV (n = 5), and a sampling rate of about 96 samples h-1. The proposed method was applied to the analysis of beverages and pharmaceuticals.
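A small worked conversion, assuming the quoted 17.7-194.0 µg ml-1 linear range refers to the ascorbic acid analyte (C6H8O6, molar mass 176.12 g/mol; the abstract does not state this explicitly): expressing the FIA working range in molar units so it can be compared with the sensor's 1.0x10-2 to 6.0x10-6 M potentiometric range.

```python
MOLAR_MASS_AA = 176.12                       # g/mol, ascorbic acid (C6H8O6)

def ugml_to_molar(c_ug_per_ml: float) -> float:
    """ug/ml -> g/L -> mol/L."""
    return c_ug_per_ml / 1000.0 / MOLAR_MASS_AA

for c in (17.7, 194.0):
    print(f"{c:6.1f} ug/ml  =  {ugml_to_molar(c):.2e} M")
# roughly 1.0e-4 M to 1.1e-3 M, i.e. inside the sensor's potentiometric range
```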
Abstract:
In the business world, issues such as globalisation, environmental awareness, and the rising expectations of public opinion shape what is required from companies as providers of information to the market. This chapter reviews the current state of corporate reporting (financial reporting and sustainability reporting) and demonstrates the need for evolution towards a more integrated method of reporting that meets stakeholders' needs. This research offers a reflection on how this development can be achieved, noting the ongoing efforts by international organisations to promote its diffusion and adoption, as well as the characteristics required for this type of reporting. It also draws on the actual case of a company that is one of the world's references in sustainable development and integrated reporting. Whether or not integrated reporting is the natural evolution of financial and sustainability reporting, it cannot yet claim to be infallible. However, it may definitely be concluded that a new approach is necessary to meet the continuously developing needs of a network of stakeholders.