901 results for Techniques of data analysis


Relevance: 100.00%

Abstract:

Visually representing the external appearance of an extinct animal requires, for a reasonably reliable and expressive reconstruction, a careful compilation and arrangement of the scientific conclusions drawn from the fossil findings. This work proposes an initial model of a briefing to be applied in the paleodesign process of a paleovertebrate. A briefing can be understood as the gathering of all the data needed to carry out a project. We point out what must be known about the relevant structures in order to access all the data, and the importance of such information. The suggested briefing is meant to be applied with flexibility, serving as a facilitating interface between paleoartists and paleontologists.

Relevance: 100.00%

Abstract:

In this paper we focus on the application of two alternative mathematical tasks to the teaching and learning of functions with high school students. The tasks were designed according to the following methodological approaches: (i) problem solving and/or mathematical investigation, and (ii) a pedagogical proposal which holds that mathematical knowledge develops through a balance between logic and intuition. We employed a qualitative research approach (characterized as a case study) aimed at analyzing the didactic-pedagogical potential of this type of methodology in high school. We found that tasks such as those presented and discussed in this paper provide more meaningful learning for the students, allowing a better conceptual understanding, and become even more powerful when the students' socio-cultural context is taken into account.

Relevance: 100.00%

Abstract:

Defining pharmacokinetic parameters and depletion intervals for antimicrobials used in fish provides important guidelines for the future regulation, by Brazilian agencies, of the use of these substances in fish farming. This article presents a depletion study of oxytetracycline (OTC) in tilapias (Oreochromis niloticus) farmed under tropical conditions during the winter season. A high-performance liquid chromatography method with fluorescence detection for the quantitation of OTC in tilapia fillets and medicated feed was developed and validated. The depletion study was carried out under monitored environmental conditions. OTC was administered in the feed for five consecutive days at a daily dosage of 80 mg/kg body weight. Groups of ten fish were slaughtered at 1, 2, 3, 4, 5, 8, 10, 15, 20, and 25 days after medication. After the 8th day post-treatment, OTC concentrations in the tilapia fillets were below the limit of quantitation (13 ng/g) of the method. The linear regression of the data-analysis model presented a coefficient of 0.9962. The elimination half-life of OTC in tilapia fillet and the withdrawal period were 1.65 and 6 days, respectively, considering the 99th percentile with 95% confidence and a maximum residue limit of 100 ng/g. Even though the study was carried out in winter under practical conditions in which the water temperature varied, the results are similar to those of studies conducted under controlled temperature.
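Calculations of this kind rest on a log-linear depletion model fitted by linear regression. The sketch below uses invented concentration values and a fixed, simplified tolerance factor `K` (the regulatory method uses exact one-sided tolerance limits for the 99th percentile at 95% confidence), so it only illustrates the shape of the computation, not the study's actual numbers.

```python
import numpy as np

# Hypothetical depletion data (not the study's measurements):
# day post-treatment and OTC concentration in fillet (ng/g).
t = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
c = np.array([850.0, 520.0, 310.0, 190.0, 115.0])

# Log-linear depletion model: ln C(t) = a - k*t, fitted by least squares.
slope, a = np.polyfit(t, np.log(c), 1)
k = -slope
half_life = np.log(2) / k  # elimination half-life in days

# Withdrawal period: first whole day at which an upper one-sided bound on
# ln C falls below ln(MRL). K = 3.4 is an assumed, simplified tolerance
# factor standing in for the exact 99%/95% tolerance limit.
resid = np.log(c) - (a - k * t)
s = np.sqrt(np.sum(resid**2) / (len(t) - 2))
K = 3.4
mrl = 100.0  # maximum residue limit, ng/g
withdrawal_days = int(np.ceil((a + K * s - np.log(mrl)) / k))
```

With real data the regression would be run on the measured fillet concentrations above the limit of quantitation, and the tolerance factor would depend on the sample size.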

Relevance: 100.00%

Abstract:

Background: Atrial fibrillation is a serious public health problem, posing a considerable burden not only on patients but also on the healthcare system, owing to high rates of morbidity, mortality, and medical resource utilization. There are limited data on the variation in treatment practice patterns across different countries and healthcare settings, and on the associated health outcomes. Methods/design: RHYTHM-AF was a prospective observational multinational study of the management of patients with recent-onset atrial fibrillation considered for cardioversion, designed to collect data on international treatment patterns and short-term outcomes related to cardioversion. We present data collected in 10 countries between May 2010 and June 2011; enrollment was still ongoing in Italy and Brazil at the time of data analysis. Data were collected at the time of the atrial fibrillation episode in all countries (Australia, Brazil, France, Germany, Italy, Netherlands, Poland, Spain, Sweden, United Kingdom), and cumulative follow-up data were collected at day 60 (+/- 10) in all countries but Spain. Information was collected on center characteristics, enrollment, patient demographics, details of the atrial fibrillation episode, medical history, diagnostic procedures, acute treatment of atrial fibrillation, discharge information, and follow-up data on major events and rehospitalizations up to day 60. Discussion: A total of 3940 patients were enrolled from 175 acute care centers. 70.5% of the centers were either academic (44%) or teaching (26%) hospitals, with an overall median capacity of 510 beds. The sites were mostly specialized, with anticoagulation clinics (65.9%), heart failure clinics (75.1%), and hypertension clinics (60.1%) available. The RHYTHM-AF registry will provide insight into the regional variability of antiarrhythmic and antithrombotic treatment of atrial fibrillation, the appropriateness of such treatments with respect to outcomes, and their cost-efficacy. These observations will help inform strategies to improve cardiovascular outcomes in patients with atrial fibrillation.

Relevance: 100.00%

Abstract:

Determination of the utility harmonic impedance based on measurements is a significant task for utility power-quality improvement and management. Compared with the well-established, accurate invasive methods, noninvasive methods are more desirable because they work with the natural variations of the loads connected to the point of common coupling (PCC), so no intentional disturbance is needed. However, the accuracy of these methods has to be improved. In this context, this paper first points out that the critical problem of the noninvasive methods is how to select the measurements that can be used with confidence for utility harmonic impedance calculation. It then presents a new measurement technique based on complex-data least-squares regression, combined with two data-selection techniques. Simulation and field-test results show that the proposed noninvasive method is practical and robust, so it can be used with confidence to determine utility harmonic impedances.
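The regression core of such an approach, a complex-valued least-squares fit of PCC voltage phasors against PCC current phasors, can be sketched as follows. The phasors are synthetic and the paper's two data-selection techniques are not reproduced; this only illustrates the least-squares step on complex data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic PCC phasor measurements (illustrative values, not field data).
Z_u = 0.5 + 2.0j    # "true" utility harmonic impedance (ohm)
E_u = 10.0 + 1.0j   # utility-side background harmonic voltage (V)
I = rng.uniform(2, 8, 200) * np.exp(1j * rng.uniform(0, 2 * np.pi, 200))
noise = rng.normal(0, 0.05, 200) + 1j * rng.normal(0, 0.05, 200)
V = E_u - Z_u * I + noise  # Thevenin-style PCC voltage equation

# Complex least squares: V ~ E - Z*I, i.e. design-matrix columns [1, -I].
A = np.column_stack([np.ones_like(I), -I])
(E_est, Z_est), *_ = np.linalg.lstsq(A, V, rcond=None)
```

In practice the quality of `Z_est` depends heavily on which measurement windows enter the regression, which is exactly why the paper pairs the regression with data-selection techniques.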

Relevance: 100.00%

Abstract:

Work carried out by: Garijo, J. C., Hernández León, S.

Relevance: 100.00%

Abstract:

Isogeometric analysis (IGA) has arisen as an attempt to unify the fields of CAD and classical finite element methods. The main idea of IGA is to use for analysis the same functions (splines) that are used in the CAD representation of the geometry. The main advantages with respect to the traditional finite element method are a higher smoothness of the numerical solution and a more accurate representation of the geometry. IGA seems to be a promising tool with a wide range of applications in engineering. However, this relatively new technique has some open problems that require a solution. In this work we present our results and contributions to this issue…
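The spline functions IGA borrows from CAD are typically B-splines, which can be evaluated with the standard Cox-de Boor recursion. A minimal sketch (textbook form, not code from this work) also checks the partition-of-unity property that makes these functions usable as analysis basis functions:

```python
def bspline_basis(i, p, u, U):
    """Cox-de Boor recursion: i-th B-spline basis function of degree p
    over knot vector U, evaluated at parameter u."""
    if p == 0:
        return 1.0 if U[i] <= u < U[i + 1] else 0.0
    left = 0.0
    if U[i + p] != U[i]:
        left = (u - U[i]) / (U[i + p] - U[i]) * bspline_basis(i, p - 1, u, U)
    right = 0.0
    if U[i + p + 1] != U[i + 1]:
        right = ((U[i + p + 1] - u) / (U[i + p + 1] - U[i + 1])
                 * bspline_basis(i + 1, p - 1, u, U))
    return left + right

# Quadratic basis on an open knot vector; the n = len(U) - p - 1 basis
# functions should sum to one at any interior parameter value.
U = [0, 0, 0, 1, 2, 3, 3, 3]
p = 2
n = len(U) - p - 1
total = sum(bspline_basis(i, p, 0.7, U) for i in range(n))
```

The open (clamped) knot vector used here is the form CAD systems typically produce, which is what lets the same representation serve both geometry and analysis.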

Relevance: 100.00%

Abstract:

Seyfert galaxies are the closest active galactic nuclei. As such, we can use them to test the physical properties of the entire class of objects. To investigate their general properties, I took advantage of different methods of data analysis. In particular, I used three different samples of objects that, despite frequent overlaps, were chosen to best tackle different topics: the heterogeneous BeppoSAX sample was optimized to test the average hard X-ray (E above 10 keV) properties of nearby Seyfert galaxies; the X-CfA sample was optimized to compare the properties of low-luminosity sources with those of higher luminosity and, thus, was also used to test emission-mechanism models; finally, the XMM-Newton sample was extracted from the X-CfA sample so as to ensure a truly unbiased and well-defined sample with which to define the average properties of Seyfert galaxies. Taking advantage of the broad-band coverage of the BeppoSAX MECS and PDS instruments (between ~2 and 100 keV), I infer the average X-ray spectral properties of nearby Seyfert galaxies, in particular the photon index (~1.8), the high-energy cut-off (~290 keV), and the relative amount of cold reflection (~1.0). Moreover, the unified scheme for active galactic nuclei was positively tested. The distributions of the isotropic indicators used here (photon index, relative amount of reflection, high-energy cut-off, and narrow FeK energy centroid) are similar in type I and type II objects, while the absorbing column and the iron-line equivalent width differ significantly between the two classes of sources, with type II objects displaying larger absorbing columns. Taking advantage of the XMM-Newton and X-CfA samples, I also deduced from measurements that 30 to 50% of type II Seyfert galaxies are Compton thick. Confirming previous results, the narrow FeK line in Seyfert 2 galaxies is consistent with being produced in the same matter responsible for the observed obscuration.
These results support the basic picture of the unified model. Moreover, the presence of an X-ray Baldwin effect in type I sources has been measured, using for the first time the 20-100 keV luminosity (EW proportional to L(20-100)^(−0.22±0.05)). This finding suggests that the torus covering factor may be a function of source luminosity, calling for a refinement of the baseline version of the unified model itself. Using the BeppoSAX sample, a possible correlation between the photon index and the amount of cold reflection has also been recorded in both type I and type II sources. At first glance, this confirms thermal Comptonization as the most likely origin of the high-energy emission of active galactic nuclei. This relation, in fact, emerges naturally if one supposes that the accretion disk penetrates the central corona to a depth that depends on the accretion rate (Merloni et al. 2006): the higher-accreting systems host disks down to the last stable orbit, while the lower-accreting systems host truncated disks. On the contrary, the study of the well-defined X-CfA sample of Seyfert galaxies has shown that the intrinsic X-ray luminosity of nearby Seyfert galaxies can span values between 10^(38-43) erg s^-1, i.e. covering a huge range of accretion rates. The less efficient systems have been supposed to host ADAF flows without an accretion disk. However, the study of the X-CfA sample has also shown the existence of correlations between optical emission lines and X-ray luminosity over the entire range of L_X covered by the sample. These relations are similar to the ones obtained when only high-luminosity objects are considered; thus the emission mechanism must be similar in luminous and weak systems. A possible scenario to reconcile these somewhat opposite indications is to assume that the ADAF and the two-phase mechanism co-exist, with different relative importance moving from low- to high-accretion systems (as suggested by the Gamma vs. R relation).
The present data require that no abrupt transition between the two regimes be present. As mentioned above, the possible presence of an accretion disk has been tested using samples of nearby Seyfert galaxies. Here, to investigate in depth the flow patterns close to super-massive black holes, three case-study objects for which sufficient count statistics are available have been analysed using deep X-ray observations taken with XMM-Newton. The results show that the accretion flow can differ significantly between objects when analyzed in appropriate detail. For instance, the accretion disk is well established down to the last stable orbit in a Kerr system for IRAS 13197-1627, where a strong light-bending effect has been measured. The accretion disk seems to form, spiraling in, within the inner ~10-30 gravitational radii in NGC 3783, where time-dependent and recursive modulation has been measured both in the continuum emission and in the broad-emission-line component. Finally, the accretion disk seems to be only weakly detectable in Mrk 509, with its weak broad-emission-line component. Blueshifted resonant absorption lines have been detected in all three objects. This seems to demonstrate that, around super-massive black holes, there is matter not confined to the accretion disk that moves along the line of sight with velocities as large as v~0.01-0.4c (where c is the speed of light). Whether this matter forms winds or blobs is still a matter of debate, together with the assessment of the real statistical significance of the measured absorption lines. Nonetheless, if confirmed, these phenomena are of outstanding interest because they offer new potential probes of the dynamics of the innermost regions of accretion flows, of the formation of ejecta/jets, and of the rate of kinetic energy injected by AGNs into the ISM and IGM.
Future high-energy missions (such as the planned Simbol-X and IXO) will likely allow an exciting step forward in our understanding of the flow dynamics around black holes and the formation of the highest-velocity outflows.
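A power-law relation like EW ∝ L^(−0.22) is typically measured as a straight-line fit in log-log space, with the slope estimating the exponent. A sketch on synthetic data (the numbers are invented; only the procedure is illustrated):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic sample drawn to follow EW ∝ L^-0.22 with log-normal scatter;
# the normalization 12.0 and scatter 0.05 dex are arbitrary choices.
log_L = rng.uniform(42.0, 46.0, 40)                      # log10 L(20-100 keV)
log_EW = 12.0 - 0.22 * log_L + rng.normal(0, 0.05, 40)   # log10 EW

slope, intercept = np.polyfit(log_L, log_EW, 1)  # slope estimates the exponent
```

Real measurements would also propagate the per-source uncertainties into the quoted ±0.05 error on the exponent, e.g. via a weighted fit or bootstrap.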

Relevance: 100.00%

Abstract:

The aim of this thesis was to describe the development of motion analysis protocols for upper- and lower-limb applications using inertial sensor-based systems. Inertial sensor-based systems are relatively recent, so knowledge and development of methods and algorithms for their clinical use are limited compared with stereophotogrammetry. However, their advantages in terms of low cost, portability and small size are a valid reason to follow this direction. When developing motion analysis protocols based on inertial sensors, attention must be given to several aspects, such as the accuracy and reliability of inertial sensor-based systems. The need to develop specific algorithms, methods and software for using these systems in specific applications is as important as the development of the motion analysis protocols based on them. For this reason, the goal of the three-year research project described in this thesis was pursued first of all by trying to correctly design the protocols based on inertial sensors, exploring and developing the features suitable for each specific application. The use of optoelectronic systems was necessary because they provide a gold-standard, accurate measurement, which served as a reference for the validation of the protocols based on inertial sensors. The protocols described in this thesis can be particularly helpful for rehabilitation centers in which the high cost of instrumentation or limited working areas do not allow the use of stereophotogrammetry. Moreover, many applications requiring upper- and lower-limb motion analysis outside the laboratory will benefit from these protocols, for example gait analysis performed along corridors. Outdoors, steady-state walking and the behavior of prosthetic devices when encountering slopes or obstacles during walking can also be assessed.
The application of inertial sensors to lower-limb amputees presents conditions that are challenging for magnetometer-based systems, owing to the ferromagnetic materials commonly adopted in the construction of hydraulic components and motors. The INAIL Prostheses Centre stimulated and, together with Xsens Technologies B.V., supported the development of additional methods for improving the accuracy of the MTx sensor in measuring the 3D kinematics of lower-limb prostheses, with the results provided in this thesis. In the author's opinion, this thesis and the motion analysis protocols based on inertial sensors described here demonstrate how close collaboration between industry, clinical centers and research laboratories can improve knowledge and exchange know-how, with the common goal of developing new application-oriented systems.
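A common building block in inertial sensor fusion, shown here as a generic illustration rather than as the thesis's actual algorithms, is the complementary filter: it blends integrated gyroscope rate (accurate short-term, drifting long-term) with an accelerometer-derived inclination (noisy short-term, but drift-free), and needs no magnetometer for sagittal-plane tilt.

```python
import math

def complementary_step(angle_prev, gyro_rate, accel, dt, alpha=0.98):
    """One update of a 1-D complementary filter for sagittal-plane tilt.

    angle_prev: previous tilt estimate (rad)
    gyro_rate:  measured angular rate (rad/s), possibly biased
    accel:      (a_forward, a_vertical) accelerometer reading (m/s^2)
    """
    angle_gyro = angle_prev + gyro_rate * dt     # short-term: integrate gyro
    angle_acc = math.atan2(accel[0], accel[1])   # long-term: gravity direction
    return alpha * angle_gyro + (1.0 - alpha) * angle_acc

# Static segment with a biased gyro: pure integration drifts away,
# while the complementary estimate stays bounded near zero tilt.
dt, bias = 0.01, 0.01            # 100 Hz sampling, 0.01 rad/s gyro bias
angle_f, angle_i = 0.0, 0.0
for _ in range(5000):            # 50 s of standing still
    angle_f = complementary_step(angle_f, bias, (0.0, 9.81), dt)
    angle_i += bias * dt         # naive gyro-only integration
```

The blend weight `alpha` trades drift rejection against accelerometer noise; full 3D orientation on amputee gait would instead use the quaternion-based sensor-fusion filters shipped with systems such as the Xsens MTx.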

Relevance: 100.00%

Abstract:

In this dissertation, the nuclear reactions 25Mg(alpha,n)28Si, 26Mg(alpha,n)29Si and 18O(alpha,n)21Ne are investigated in the astrophysically interesting energy range from E alpha = 1000 keV to E alpha = 2450 keV.

The experiments were carried out at the Nuclear Structure Laboratory of the University of Notre Dame (USA) with its on-site KN Van de Graaff accelerator. Solid targets with evaporated magnesium or anodized oxygen were bombarded with alpha particles, and the released neutrons were studied. To detect the released neutrons, a neutron detector based on 3He counter tubes was designed with the help of computer simulations. Furthermore, owing to the increased occurrence of background reactions, several methods of data analysis were applied.

Finally, the influence of the reactions 25Mg(alpha,n)28Si, 26Mg(alpha,n)29Si and 18O(alpha,n)21Ne on stellar nucleosynthesis is investigated by means of network calculations.
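A nucleosynthesis network calculation integrates coupled abundance equations of the form dY/dt = −λY + (production terms). The deliberately minimal single-species sketch below (the rate value is invented, not one of the measured rates) uses the implicit Euler step that stiff network solvers favor, and checks it against the analytic exponential:

```python
import numpy as np

# Depletion of a single target species at a constant effective rate lam (1/s).
# Real networks couple many species through temperature-dependent rates
# N_A<sigma v>; this shows only the time-stepping scheme.
lam = 1.0e-3
dt = 0.1
n_steps = 50000

Y = 1.0  # initial abundance fraction
for _ in range(n_steps):
    Y = Y / (1.0 + lam * dt)  # implicit Euler: unconditionally stable

analytic = np.exp(-lam * dt * n_steps)  # exact solution exp(-lam*t)
```

Implicit (backward) differencing is the standard choice here because astrophysical networks mix rates spanning many orders of magnitude, which makes explicit schemes unstable at practical step sizes.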