942 results for One-point Quadrature


Relevância: 80.00%

Resumo:

The present paper has two goals. The first is to present a natural example of a new class of random fields, the variable-neighborhood random fields; the example we consider is a partially observed nearest-neighbor binary Markov random field. The second is to establish sufficient conditions ensuring that the variable neighborhoods are almost surely finite. We discuss the relationship between the almost sure finiteness of the interaction neighborhoods and the presence or absence of phase transition in the underlying Markov random field. In the case where the underlying random field has no phase transition, we show that the finiteness of neighborhoods depends on a specific relation between the noise level and the minimum values of the one-point specification of the Markov random field. The case in which there is a phase transition is addressed in the framework of the ferromagnetic Ising model; we prove that the existence of infinite interaction neighborhoods depends on the phase.
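The one-point specification mentioned above is simply the conditional law of a single spin given its neighbors. A minimal sketch for the nearest-neighbor Ising case (not taken from the paper; the lattice, inverse temperature, and function names are illustrative assumptions):

```python
import math

def one_point_specification(spin, neighbors, beta):
    """Conditional probability P(sigma_x = spin | neighboring spins) for a
    nearest-neighbor Ising Markov random field at inverse temperature beta.
    This conditional law is the 'one-point specification'."""
    field = sum(neighbors)               # sum of the neighboring spins
    num = math.exp(beta * spin * field)  # unnormalized weight of `spin`
    den = math.exp(beta * field) + math.exp(-beta * field)
    return num / den

# The minimum value of the one-point specification over neighbor
# configurations (the quantity entering the finiteness condition above),
# here on Z^2 where each site has 4 neighbors:
beta = 0.4
p_min = min(one_point_specification(s, [t] * 4, beta)
            for s in (-1, 1) for t in (-1, 1))
```

For small noise levels relative to `p_min`, the abstract's condition guarantees almost surely finite neighborhoods; the sketch only illustrates where `p_min` comes from.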

Relevância: 80.00%

Resumo:

Objective: This study aimed to investigate the effect of 830 and 670 nm diode lasers on the viability of random skin flaps in rats. Background data: Low-level laser therapy (LLLT) has been reported to be successful in stimulating the formation of new blood vessels and reducing the inflammatory process after injury. However, the efficiency of such treatment remains uncertain, and there is also some controversy regarding the efficacy of the different wavelengths currently on the market. Materials and methods: Thirty Wistar rats were used and divided into three groups of 10 rats each. A random skin flap was raised on the dorsum of each animal. Group 1 was the control group, group 2 received 830 nm laser radiation, and group 3 was submitted to 670 nm laser radiation (power density = 0.5 mW/cm(2)). The animals underwent laser therapy with 36 J/cm(2) energy density (total energy = 2.52 J and 72 sec per session) immediately after surgery and on the 4 subsequent days. The application site of the laser radiation was one point at 2.5 cm from the flap's cranial base. The percentage of skin flap necrosis area was calculated on the 7th postoperative day using the paper template method. A skin sample was collected immediately afterward to determine the vascular endothelial growth factor (VEGF) expression and the epidermal cell proliferation index (Ki-67). Results: Statistically significant differences were found among the percentages of necrosis, with higher values observed in group 1 compared with groups 2 and 3; no statistically significant difference was found between the latter two groups using the paper template method. Group 3 presented the highest mean number of blood vessels expressing VEGF and of cells in the proliferative phase when compared with groups 1 and 2. Conclusions: LLLT was effective in increasing random skin flap viability in rats. The 670 nm laser presented more satisfactory results than the 830 nm laser.

Relevância: 80.00%

Resumo:

The present study sought to assess nasal respiratory function in adult patients with maxillary constriction who underwent surgically assisted rapid maxillary expansion (SARME) and to determine correlations between orthodontic measurements and changes in nasal area, volume, resistance, and airflow. Twenty-seven patients were assessed by acoustic rhinometry, rhinomanometry, orthodontic measurements, and use of a visual analogue scale at three time points: before surgery; after activation of a preoperatively applied palatal expander; and 4 months post-SARME. Results showed a statistically significant increase (p < 0.001) in all orthodontic measurements. The overall area of the nasal cavity increased after surgery (p < 0.036). The mean volume increased between assessments, but not significantly. Expiratory and inspiratory flow increased over time (p < 0.001). Airway resistance decreased between assessments (p < 0.004). Subjective analysis of the feeling of breathing exclusively through the nose increased significantly from one point in time to the next (p < 0.05). There was a statistical correlation between increased arch perimeter and decreased airway resistance. Respiratory flow was the only variable to behave differently between sides. The authors conclude that the SARME procedure produces major changes in the oral and nasal cavity; when combined, these changes improve patients' quality of breathing.

Relevância: 80.00%

Resumo:

Master's Degree in Intelligent Systems and Numerical Applications in Engineering (Máster Universitario en Sistemas Inteligentes y Aplicaciones Numéricas en Ingeniería, SIANI)

Relevância: 80.00%

Resumo:

This thesis studies a class of stochastic processes possessing an abstract branching property. The processes considered are time-homogeneous continuous-time Markov processes with states in multidimensional real space and its one-point compactification. Starting from minimal requirements on the associated transition function, a complete characterization of the finite-dimensional distributions of multidimensional continuous branching processes is given. Using an extended Laplace calculus, it is shown that every such process is uniquely determined by a certain spectrally positive infinitely divisible distribution. Conversely, it is proved that for every such infinitely divisible distribution a corresponding branching process can be constructed. By means of the general theory of Markov operator semigroups, it is ensured that every multidimensional continuous branching process has a version with paths in the space of càdlàg functions. Furthermore, (functional) weak convergence of the processes can be reduced to vague convergence of the associated characterizations. This yields general approximation and convergence theorems for the class of processes under consideration. These general results are applied to the subclass of branching diffusions, for which it is shown that a version with continuous paths always exists. Finally, the most general form of Feller's diffusion approximation for multitype Galton-Watson processes is proved.
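The abstract branching property can be stated compactly via Laplace transforms; a standard formulation (the notation here is a common convention, not taken verbatim from the thesis) is:

```latex
% Branching property: the process started from x + y is, in law, the sum of
% two independent copies started from x and from y. In Laplace-transform form:
\mathbb{E}_{x+y}\!\left[ e^{-\langle \lambda,\, X_t \rangle} \right]
  = \mathbb{E}_{x}\!\left[ e^{-\langle \lambda,\, X_t \rangle} \right]
    \mathbb{E}_{y}\!\left[ e^{-\langle \lambda,\, X_t \rangle} \right],
  \qquad \lambda \in \mathbb{R}^{d}_{\ge 0},\; t \ge 0.
```

The extended Laplace calculus mentioned above works with exactly such functionals to tie each process to its spectrally positive infinitely divisible distribution.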

Relevância: 80.00%

Resumo:

This thesis centers on the proof of existence and uniqueness of quadrature formulas suitable for the qualocation method. The latter is a method developed by Sloan, Wendland, and Chandler for the numerical treatment of boundary integral equations on smooth curves (more generally: periodic pseudodifferential equations). It attains the same convergence orders as the Petrov-Galerkin method when quadrature formulas determined by the operator are used. First, the pseudodifferential operators treated here and the qualocation method are introduced. A theory of existence and uniqueness of quadrature formulas is then developed. An essential tool is the generalization, proved here, of a theorem of Nürnberger on the existence and uniqueness of quadrature formulas with positive weights that are exact for Chebyshev spaces. It is then shown that there always exist uniquely determined quadrature formulas satisfying the conditions formulated in the papers of Sloan and Wendland. Furthermore, 2-point quadrature formulas are determined for so-called simple operators, with which the qualocation method with a test space of piecewise constant functions achieves a higher order of convergence. It is also shown that for non-simple operators there is in general no quadrature formula yielding a convergence order higher than that of the Petrov-Galerkin method. The final chapter contains numerical tests with operators with constant and variable coefficients, confirming the theoretical results of the preceding chapters.
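As a generic illustration of a quadrature formula with positive weights that is exact on a small Chebyshev space, the sketch below checks the classical 2-point Gauss-Legendre rule, which is exact for polynomials up to degree 3 on [-1, 1] (a textbook example, not one of the operator-adapted qualocation rules of the thesis):

```python
import numpy as np

# 2-point Gauss-Legendre rule on [-1, 1]: nodes +-1/sqrt(3), unit weights.
nodes = np.array([-1.0, 1.0]) / np.sqrt(3.0)
weights = np.array([1.0, 1.0])

def quad2(f):
    """Apply the 2-point rule to f on [-1, 1]."""
    return float(np.dot(weights, f(nodes)))

# Exactness check against the monomial moments: integral of x^k over [-1, 1]
# equals 0 for odd k and 2/(k+1) for even k.
for k in range(4):
    exact = 0.0 if k % 2 else 2.0 / (k + 1)
    assert abs(quad2(lambda x: x**k) - exact) < 1e-12
```

The rules studied in the thesis play the same structural role (positive weights, exactness on a Chebyshev system) but are tailored to the pseudodifferential operator at hand.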

Relevância: 80.00%

Resumo:

The present research aims at shedding light on the demanding puzzle of child undernutrition in India. The so-called 'Indian development paradox' describes the phenomenon whereby higher income per capita is recorded alongside a lethargic reduction in the proportion of underweight children aged below three years: between 2000 and 2005, real Gross Domestic Product per capita grew at 5.4% annually, whereas the proportion of children who are underweight declined from 47% to 46%, a mere one percentage point. This trend opens up space for discussing the traditionally assumed linkage between income poverty and undernutrition, as well as food intervention as the main focus of policies designed to fight child hunger. It also unlocks doors for evaluating the role of an alternative economic approach to explaining undernutrition, the Capability Approach, which argues for widening the informational basis to account not only for resources but also for variables related to liberties, opportunities, and autonomy in pursuing what individuals value. The econometric analysis highlights the relevance of including behavioral factors when explaining child undernutrition. In particular, the ability of the mother to move freely in the community without needing to ask permission of her husband or mother-in-law is statistically significant when included in the model, which also accounts for confounding traditional variables such as economic wealth and food security. Focusing on agency, the results indicate the necessity of measuring autonomy in different domains and the need to improve the measurement scale for agency data, especially with regard to the domain of household duties.
Finally, future research is required to investigate policy avenues for increasing agency among women and in the communities they live in as a viable strategy for reducing the plague of child undernutrition in India.

Relevância: 80.00%

Resumo:

To handle natural disasters, emergency areas are often identified across the territory, close to populated centres. In these areas, rescue services are located which respond with resources and materials for population relief. A method for the automatic positioning of these centres in case of a flood or an earthquake is presented. The positioning procedure consists of two distinct parts, developed by the research group of Prof Michael G. H. Bell of Imperial College, London, and refined and applied to real cases at the University of Bologna under the coordination of Prof Ezio Todini. Certain requirements need to be observed, such as the maximum number of rescue points as well as the number of people involved. Initially, the candidate points are chosen from those proposed by the local civil protection services. We then calculate all possible routes from each candidate rescue point to all other points, generally using the concept of the "hyperpath", namely a set of paths each of which may be optimal. The attributes of the road network are of fundamental importance, both for the calculation of the ideal distance and for eventual delays due to the event, measured in travel time units. In a second phase, the distances are used to decide the optimum rescue point positions using heuristics. This second part works by elimination: in the beginning, all points are considered rescue centres; during every iteration we delete one point and calculate the impact this creates. In each case, we delete the point that creates the least impact, until we reach the number of rescue centres we wish to keep.
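The elimination heuristic described above can be sketched as follows (a toy illustration; the function and data names are assumptions, and real travel times would come from the hyperpath computation):

```python
def eliminate_rescue_centres(candidates, demand_points, dist, k):
    """Greedy elimination sketch: start with all candidate rescue centres
    open, then repeatedly close the centre whose removal increases total
    travel cost the least, until k centres remain.
    dist[c][p] is the travel time from centre c to demand point p."""
    open_centres = set(candidates)

    def total_cost(centres):
        # each demand point is served by its nearest open centre
        return sum(min(dist[c][p] for c in centres) for p in demand_points)

    while len(open_centres) > k:
        # close the centre whose removal has the least impact
        least_impact = min(open_centres,
                           key=lambda c: total_cost(open_centres - {c}))
        open_centres.remove(least_impact)
    return open_centres

# Toy example: 4 candidate centres, 3 demand points, keep 2 centres.
dist = {
    'A': {'p1': 1, 'p2': 9, 'p3': 9},
    'B': {'p1': 2, 'p2': 8, 'p3': 8},
    'C': {'p1': 9, 'p2': 1, 'p3': 2},
    'D': {'p1': 9, 'p2': 2, 'p3': 1},
}
kept = eliminate_rescue_centres(['A', 'B', 'C', 'D'],
                                ['p1', 'p2', 'p3'], dist, 2)
```

In the toy data, centre A alone serves p1 well while C and D cover p2/p3, so the heuristic first drops the redundant centre B and then one of the near-duplicates C or D.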

Relevância: 80.00%

Resumo:

This report outlines the development, validity, and reliability of Part A of the OARS Multidimensional Functional Assessment Questionnaire. Part A permits assessment of individuals' functioning on each of five dimensions (social, economic, mental health, physical health, and self-care capacity), the detailed information in each area being summarized on a 6-point rating scale by a rater. Content and consensual validity were ensured by the manner of construction. Information on criterion validity was obtained for all dimensions except social. The criteria used and their associated Kendall's Tau values were: an objective economic scale (.62); ratings based on personal interviews by geropsychiatrists (.60); physician's associates (.82); and physical therapists (.89). For 11 geographically dispersed raters from research and clinic settings, intraclass correlation coefficients, based on 30 subjects, ranged from .66 on physical health to .87 on self-care capacity; 74% of the ratings were in complete agreement, and 24% differed by one point.
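The two agreement figures reported at the end can be computed mechanically; a minimal sketch with hypothetical ratings (the names and data below are assumptions, not the study's):

```python
def agreement_summary(ratings_a, ratings_b):
    """Share of paired ratings in complete agreement and share differing
    by exactly one point, for two raters scoring the same subjects on a
    6-point scale."""
    n = len(ratings_a)
    exact = sum(a == b for a, b in zip(ratings_a, ratings_b))
    off_by_one = sum(abs(a - b) == 1 for a, b in zip(ratings_a, ratings_b))
    return exact / n, off_by_one / n

# Hypothetical ratings of 10 subjects by two raters:
rater1 = [1, 2, 3, 4, 5, 6, 3, 2, 4, 5]
rater2 = [1, 2, 3, 4, 5, 6, 4, 2, 4, 4]
exact_pct, one_off_pct = agreement_summary(rater1, rater2)
```

With the study's real data this kind of tabulation yields the reported 74% complete agreement and 24% one-point disagreement.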

Relevância: 80.00%

Resumo:

Resistance of trypanosomes to melarsoprol is ascribed to reduced uptake of the drug via the P2 nucleoside transporter. The aim of this study was to look for evidence of drug resistance in Trypanosoma brucei gambiense isolates from sleeping sickness patients in Ibba, South Sudan, an area with a high melarsoprol failure rate. Eighteen T. b. gambiense stocks were characterized phenotypically, and 10 of these strains also genotypically. In vitro, all isolates were sensitive to melarsoprol, melarsen oxide, and diminazene. Infected mice were cured with a 4 day treatment of 2.5 mg/kg bwt melarsoprol, confirming that the isolates were sensitive. The gene that codes for the P2 transporter, TbAT1, was amplified by PCR and sequenced. The sequences were almost identical to the TbAT1(sensitive) reference, except for one point mutation, C1384T, resulting in the amino acid change proline-462 to serine. None of the described TbAT1(resistant)-type mutations were detected. In a T. b. gambiense sleeping sickness focus where melarsoprol had to be abandoned due to the high incidence of treatment failures, no evidence for drug-resistant trypanosomes or for TbAT1(resistant)-type alleles of the P2 transporter could be found. These findings indicate that factors other than drug resistance contribute to melarsoprol treatment failures.
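The reported C1384T change can be sanity-checked arithmetically: nucleotide 1384 falls on the first base of codon 462, and a C→T substitution there turns a proline codon into a serine codon. A minimal sketch (the specific proline codon CCA is an assumption for illustration; the abstract does not give the TbAT1 codon usage):

```python
# Minimal codon table for this check: CCA codes Pro, TCA codes Ser.
CODON_TABLE = {'CCA': 'Pro', 'TCA': 'Ser'}

position = 1384                          # 1-based nucleotide position
codon_number = (position - 1) // 3 + 1   # which codon the position falls in
offset = (position - 1) % 3              # position within that codon (0-2)

codon = list('CCA')                      # hypothetical Pro codon at 462
codon[offset] = 'T'                      # apply the C1384T point mutation
mutated = ''.join(codon)
```

Since 1384 = 3 x 461 + 1, the mutation hits the first base of codon 462, consistent with the reported proline-462 to serine change.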

Relevância: 80.00%

Resumo:

BACKGROUND: The outcome of Kaposi sarcoma varies. While many patients do well on highly active antiretroviral therapy, others have progressive disease and need chemotherapy. In order to predict which patients are at risk of unfavorable evolution, we established a prognostic score. METHOD: A survival analysis (Kaplan-Meier method; Cox proportional hazards models) of 144 patients with Kaposi sarcoma prospectively included in the Swiss HIV Cohort Study, from January 1996 to December 2004, was conducted. Outcome analyzed: use of chemotherapy or death. Variables analyzed: demographics, tumor staging [T0 or T1 (16)], CD4 cell counts and HIV-1 RNA concentration, human herpesvirus 8 (HHV8) DNA in plasma, and serological titers to latent and lytic antigens. RESULTS: Of 144 patients, 54 needed chemotherapy or died. In the univariate analysis, tumor stage T1, CD4 cell count below 200 cells/microl, positive HHV8 DNA, and absence of antibodies against the HHV8 lytic antigen at the time of diagnosis were significantly associated with a bad outcome. Using multivariate analysis, the following variables were associated with an increased risk of unfavorable outcome: T1 [hazard ratio (HR) 5.22; 95% confidence interval (CI) 2.97-9.18], CD4 cell count below 200 cells/microl (HR 2.33; 95% CI 1.22-4.45), and positive HHV8 DNA (HR 2.14; 95% CI 1.79-2.85). We created a score with these variables ranging from 0 to 4: T1 stage counted for two points, CD4 cell count below 200 cells/microl for one point, and positive HHV8 viral load for one point. Each one-point increase was associated with an HR of 2.26 (95% CI 1.79-2.85). CONCLUSION: In the multivariate analysis, staging (T1), CD4 cell count (<200 cells/microl), and positive HHV8 DNA in plasma at the time of diagnosis predict evolution towards death or the need for chemotherapy.
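The scoring rule is simple enough to write down directly; a minimal sketch (the function names are assumptions, the point values and per-point hazard ratio are from the abstract):

```python
def ks_prognostic_score(t1_stage, cd4_below_200, hhv8_dna_positive):
    """Prognostic score from the study above: T1 stage counts two points,
    CD4 < 200 cells/microl one point, positive plasma HHV8 DNA one point
    (score range 0-4)."""
    return 2 * bool(t1_stage) + bool(cd4_below_200) + bool(hhv8_dna_positive)

def relative_hazard(score, hr_per_point=2.26):
    """Each one-point increase was associated with an HR of about 2.26
    (95% CI 1.79-2.85), so the hazard relative to score 0 is 2.26**score."""
    return hr_per_point ** score

# Example: T1 stage with low CD4 count but negative HHV8 DNA.
score = ks_prognostic_score(t1_stage=True, cd4_below_200=True,
                            hhv8_dna_positive=False)
```

Note that compounding the per-point HR across scores is an illustrative reading of the abstract, not a claim about the study's full model.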

Relevância: 80.00%

Resumo:

PURPOSE: Positron emission tomography (PET)/computed tomography (CT) measurements on small lesions are impaired by the partial volume effect, which is intrinsically tied to the point spread function of the actual imaging system, including the reconstruction algorithms. The variability resulting from different point spread functions hinders the assessment of quantitative measurements in clinical routine and especially degrades comparability within multicenter trials. To improve quantitative comparability there is a need for methods to match different PET/CT systems through elimination of this systemic variability. Consequently, a new method was developed and tested that transforms the image of an object as produced by one tomograph into the image of the same object as it would have been seen by a different tomograph. The proposed method, termed Transconvolution, compensates for the differing imaging properties of different tomographs and particularly aims at quantitative comparability of PET/CT in the context of multicenter trials. METHODS: To solve the problem of image normalization, the theory of Transconvolution was mathematically established together with new methods to handle point spread functions of different PET/CT systems. Knowing the point spread functions of two different imaging systems allows determining a Transconvolution function to convert one image into the other. This function is calculated by convolving one point spread function with the inverse of the other, which, when adhering to certain boundary conditions such as the use of linear acquisition and image reconstruction methods, is a numerically accessible operation. For reliable measurement of such point spread functions characterizing different PET/CT systems, a dedicated solid-state phantom incorporating (68)Ge/(68)Ga filled spheres was developed. To iteratively determine and represent such point spread functions, exponential density functions in combination with a Gaussian distribution were introduced. Furthermore, simulation of a virtual PET system provided a standard imaging system with clearly defined properties to which the real PET systems were to be matched. A Hann window served as the modulation transfer function for the virtual PET; its apodization properties suppress high spatial frequencies above a certain critical frequency, thereby fulfilling the above-mentioned boundary conditions. The determined point spread functions were subsequently used by the novel Transconvolution algorithm to match different PET/CT systems onto the virtual PET system. Finally, the theoretically elaborated Transconvolution method was validated by transforming phantom images acquired on two different PET systems into nearly identical data sets, as they would be imaged by the virtual PET system. RESULTS: The proposed Transconvolution method matched different PET/CT systems for an improved and reproducible determination of a normalized activity concentration. The highest difference in measured activity concentration between the two PET systems, 18.2%, was found in spheres of 2 ml volume; Transconvolution reduced this difference to 1.6%. In addition to reestablishing comparability, the new method, with its parameterization of point spread functions, allowed a full characterization of the imaging properties of the examined tomographs. CONCLUSIONS: By matching different tomographs to a virtual standardized imaging system, Transconvolution opens a new comprehensive method for cross calibration in quantitative PET imaging. The use of a virtual PET system restores comparability between data sets from different PET systems by exerting a common, reproducible, and defined partial volume effect.
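In one dimension the Transconvolution idea reduces to a ratio of transfer functions applied in frequency space; a minimal numpy sketch under assumed parameters (Gaussian measured PSF, Hann-window target MTF; the grid size, sigma, and cutoff frequency are illustrative choices, not the paper's values):

```python
import numpy as np

def transconvolution_1d(image, psf_measured, mtf_target):
    """1-D sketch of Transconvolution: re-image data acquired with one point
    spread function as if seen by a 'virtual' system with a prescribed
    modulation transfer function. In frequency space this is multiplication
    by mtf_target / OTF_measured; the target (Hann) MTF vanishes above a
    critical frequency, keeping the ratio bounded where the measured OTF
    is small."""
    otf_measured = np.fft.fft(np.fft.ifftshift(psf_measured))
    transfer = np.where(np.abs(otf_measured) > 1e-8,
                        mtf_target / otf_measured, 0.0)
    return np.real(np.fft.ifft(np.fft.fft(image) * transfer))

n = 64
x = np.arange(n) - n // 2
psf = np.exp(-0.5 * (x / 3.0) ** 2)      # measured system: Gaussian PSF
psf /= psf.sum()

freq = np.fft.fftfreq(n)
f_c = 0.12                               # assumed critical frequency
mtf_hann = np.where(np.abs(freq) < f_c,
                    0.5 * (1.0 + np.cos(np.pi * freq / f_c)), 0.0)

delta = np.zeros(n)
delta[n // 2] = 1.0                      # a point source
blurred = np.real(np.fft.ifft(np.fft.fft(delta)
                              * np.fft.fft(np.fft.ifftshift(psf))))
virtual_image = transconvolution_1d(blurred, psf, mtf_hann)
```

The point source blurred by the measured PSF comes out as the virtual system's (Hann-limited) PSF, with total counts preserved, which is the behavior the cross-calibration relies on.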

Relevância: 80.00%

Resumo:

A patient classification system was developed integrating a patient acuity instrument with a computerized nursing distribution method based on a linear programming model. The system was designed for real-time measurement of patient acuity (workload) and allocation of nursing personnel to optimize the utilization of resources. The acuity instrument was a prototype tool with eight categories of patients defined by patient severity and nursing intensity parameters. From this tool, the demand for nursing care was defined in patient points, with one point equal to one hour of RN time. Validity and reliability of the instrument were determined as follows: (1) content validity by a panel of expert nurses; (2) predictive validity through a paired t-test analysis of preshift and postshift categorization of patients; (3) initial reliability by a one-month pilot of the instrument in a practice setting; and (4) interrater reliability by the Kappa statistic. The nursing distribution system was a linear programming model using a branch and bound technique for obtaining integer solutions. The objective function was to minimize the total number of nursing personnel used by optimally assigning the staff to meet the acuity needs of the units. A penalty weight was used as a coefficient of the objective function variables to define priorities for allocation of staff. The demand constraints were requirements to meet the total acuity points needed for each unit and to have a minimum number of RNs on each unit. Supply constraints were: (1) the total availability of each type of staff and the value of that staff member (value was determined relative to that type of staff's ability to perform the job function of an RN, e.g., the value of eight hours of an RN = 8 points, of an LVN = 6 points); (2) the number of personnel available for floating between units. The capability of the model to assign staff quantitatively and qualitatively equal to the manual method was established by a thirty-day comparison. Sensitivity testing demonstrated appropriate adjustment of the optimal solution to changes in penalty coefficients in the objective function and to acuity totals in the demand constraints. Further investigation of the model documented correct adjustment of assignments in response to staff value changes, and cost minimization by the addition of a dollar coefficient to the objective function.
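A continuous relaxation of this kind of allocation model can be sketched with a generic LP solver (the unit demands, staff supplies, and penalty weights below are illustrative assumptions; the dissertation's model used branch and bound to force integer solutions):

```python
from scipy.optimize import linprog

# Variables: x = [rn_unit1, lvn_unit1, rn_unit2, lvn_unit2].
# Point values per 8-hour shift: RN = 8, LVN = 6 (as in the abstract).
# Penalty weights in the objective set allocation priorities (assumed).
c = [1.0, 0.8, 1.0, 0.8]    # penalty-weighted headcount to minimize

A_ub = [
    [-8, -6,  0,  0],   # unit 1 must receive >= 40 acuity points
    [ 0,  0, -8, -6],   # unit 2 must receive >= 30 acuity points
    [-1,  0,  0,  0],   # at least 2 RNs on unit 1
    [ 0,  0, -1,  0],   # at least 2 RNs on unit 2
    [ 1,  0,  1,  0],   # at most 8 RNs available in total
    [ 0,  1,  0,  1],   # at most 6 LVNs available in total
]
b_ub = [-40, -30, -2, -2, 8, 6]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None), method="highs")
```

In this instance RNs are the cheaper staff per acuity point under the assumed weights, so the optimum uses all 8 RNs (64 points) and one LVN to cover the remaining 6 points, for an objective value of 8.8.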

Relevância: 80.00%

Resumo:

In studies assessing outdoor range use by laying hens, the number of hens seen on outdoor ranges is inversely correlated with flock size. The aim of this study was to assess individual ranging behavior on a covered (veranda) and an uncovered outdoor run (free-range) in laying hen flocks varying in size. Five to ten percent of hens (aged 9–15 months) within 4 small (2–2500 hens), 4 medium (5–6000), and 4 large (≥9000) commercial flocks were fitted with radio frequency identification (RFID) tags. Antennas were placed at both sides of all popholes between the house and the veranda and between the veranda and the free-range. Ranging behavior was directly monitored for approximately three weeks, in combination with hourly photographs of the free-range for the distribution of hens and 6 h long video recordings on two parts of the free-range during two days. Between 79 and 99% of the tagged hens were registered on the veranda at least once, and between 47 and 90% were registered on the free-range at least once. There was no association between the percentage of hens registered outside the house (veranda or free-range) and flock size. However, individual hens in small and medium sized flocks visited the areas outside the house more frequently and spent more time there than hens from large flocks. Foraging behavior on the free-range was shown more frequently and for a longer duration by hens from small and medium sized flocks than by hens from large flocks. This difference in ranging behavior could account for the negative relationship between flock size and the number of hens seen outside at one point in time. In conclusion, our work describes individual birds' use of areas outside the house within large scale commercial egg production.

Relevância: 80.00%

Resumo:

Gaussian random field (GRF) conditional simulation is a key ingredient in many spatial statistics problems for computing Monte Carlo estimators and quantifying uncertainties on non-linear functionals of GRFs conditional on data. Conditional simulations are known to often be computationally intensive, especially when appealing to matrix decomposition approaches with a large number of simulation points. This work studies settings where conditioning observations are assimilated batch sequentially, with one point or a batch of points at each stage. Assuming that conditional simulations have been performed at a previous stage, the goal is to take advantage of already available sample paths and by-products to produce updated conditional simulations at minimal cost. Explicit formulae are provided which allow updating an ensemble of sample paths conditioned on n ≥ 0 observations to an ensemble conditioned on n + q observations, for arbitrary q ≥ 1. Compared to direct approaches, the proposed formulae prove to substantially reduce computational complexity. Moreover, these formulae explicitly exhibit how the q new observations update the old sample paths. Detailed complexity calculations highlighting the benefits of this approach with respect to state-of-the-art algorithms are provided and are complemented by numerical experiments.