969 results for Computer generated works
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
Shows stages and operations undertaken in revising the New Jersey state base map.
Abstract:
Background: Spinal anaesthesia is the standard of care for elective caesarean delivery and has advantages over general anaesthesia. However, the sympathetic blockade induced by spinal anaesthesia results in an 80 percent incidence of hypotension without prophylactic management. Current evidence supports co-loading with intravenous fluids, in conjunction with vasopressors, as the most effective way to prevent and treat this hypotension. Phenylephrine is the accepted vasopressor of choice in the parturient, and a prophylactic phenylephrine infusion combined with a fluid co-load is a proven, safe method of maintaining maternal haemodynamic stability. While most published studies have assessed the effectiveness of a prophylactic fixed-dose phenylephrine infusion, few have assessed the effect of a prophylactic weight-adjusted-dose infusion on maternal haemodynamic stability following spinal anaesthesia for caesarean delivery. Objective: To compare the incidence of hypotension between women undergoing elective caesarean section under spinal anaesthesia who received a prophylactic phenylephrine infusion at a fixed dose of 37.5 micrograms per minute versus a weight-adjusted dose of 0.5 micrograms per kilogram per minute. Methods: One hundred and eight patients scheduled for non-urgent caesarean section under spinal anaesthesia were randomized into two groups using a computer-generated table of numbers: a control group, which received the prophylactic phenylephrine infusion at a fixed dose of 37.5 micrograms per minute, and an intervention group, which received it at a weight-adjusted dose of 0.5 micrograms per kilogram per minute. Results: The two groups had similar baseline characteristics in terms of age, weight, and height. The incidence of hypotension was 35.2% in the fixed-dose group and 18.6% in the weight-adjusted-dose group.
This difference was of borderline statistical significance (p = 0.05), and the difference in incidence rates between the two groups was statistically significant (p = 0.03). The differences in the incidence of reactive hypertension (p = 0.19) and bradycardia (p = 0.42) between the two groups were not statistically significant. There was also no statistically significant difference in the use of phenylephrine boluses, the use of atropine, the intravenous fluid administered, or the number of times the infusion was stopped. Conclusion: In this population, the incidence of hypotension was significantly lower in the weight-adjusted-dose group than in the fixed-dose group. There was no difference in the number of physician interventions required to keep blood pressure within 20% of baseline, and no difference in the proportion of reactive hypertension or bradycardia between the two groups. Administering a prophylactic phenylephrine infusion at a weight-adjusted dose of 0.5 micrograms per kilogram per minute results in a lower incidence of hypotension than administration at a fixed dose of 37.5 micrograms per minute.
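The fixed-dose versus weight-adjusted comparison reported above is a standard two-proportion test. A minimal sketch (not the authors' actual analysis; the per-arm size of 54 and the rounded event counts of 19 and 10 are assumptions inferred from the reported percentages):

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided two-proportion z-test with a pooled standard error."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                    # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))   # two-sided normal tail area
    return z, p_value

# 35.2% of 54 ~ 19 events (fixed dose); 18.6% of 54 ~ 10 events (weight adjusted)
z, p = two_proportion_z(19, 54, 10, 54)
print(round(z, 2), round(p, 3))  # prints 1.95 0.051
```

With these assumed counts the test lands almost exactly on the borderline p = 0.05 quoted in the abstract, which is consistent with the reported group sizes.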
Abstract:
Shows Paterson city territory, county boundaries, and location of cities and towns in the surrounding region. Does not show roads or streets.
Abstract:
In quantitative risk analysis, the problem of estimating small threshold exceedance probabilities and extreme quantiles arises ubiquitously in bio-surveillance, economics, natural disaster insurance, quality control schemes, and related fields. A useful way to assess extreme events is to estimate the probabilities of exceeding large threshold values and the extreme quantiles of interest to the relevant authorities; such information serves as essential guidance in decision-making processes. However, in this context data are usually skewed, and the rarity of exceedances of large thresholds implies large fluctuations in the distribution's upper tail, precisely where accuracy is most desired. Extreme Value Theory (EVT) is the branch of statistics that characterizes the behavior of the upper or lower tails of probability distributions. However, existing EVT methods for estimating small threshold exceedance probabilities and extreme quantiles often show poor predictive performance when the underlying sample is not large enough or does not contain values in the distribution's tail. This dissertation is concerned with an out-of-sample semiparametric (SP) method for the estimation of small threshold exceedance probabilities and extreme quantiles. The proposed SP method for interval estimation calls for the fusion, or integration, of a given data sample with external computer-generated independent samples. Since more data are used, real as well as artificial, under certain conditions the method produces relatively short yet reliable confidence intervals for small exceedance probabilities and extreme quantiles.
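For context, the classical EVT baseline that such methods improve on is the peaks-over-threshold estimator: fit a generalized Pareto distribution (GPD) to the excesses above a high threshold and extrapolate into the tail. A sketch (this is the standard GPD approach, not the dissertation's SP method; the exponential sample and the 90th-percentile threshold are illustrative assumptions):

```python
import math
import random

def gpd_tail_probability(data, u, x):
    """Peaks-over-threshold estimate of P(X > x) for x above threshold u,
    fitting a GPD to the excesses via the method of moments."""
    excesses = [v - u for v in data if v > u]
    n_u, n = len(excesses), len(data)
    mean = sum(excesses) / n_u
    var = sum((e - mean) ** 2 for e in excesses) / (n_u - 1)
    xi = 0.5 * (1 - mean * mean / var)            # moment estimator of shape
    sigma = 0.5 * mean * (mean * mean / var + 1)  # moment estimator of scale
    z = (x - u) / sigma
    if abs(xi) < 1e-8:                            # exponential limit of the GPD
        tail = math.exp(-z)
    else:
        tail = max(0.0, 1 + xi * z) ** (-1 / xi)
    return (n_u / n) * tail                       # P(X > u) * P(X > x | X > u)

random.seed(0)
sample = [random.expovariate(1.0) for _ in range(2000)]
u = sorted(sample)[int(0.9 * len(sample))]        # 90th-percentile threshold
p_hat = gpd_tail_probability(sample, u, 6.0)      # true value is exp(-6) ~ 0.0025
print(p_hat)
```

Note that the target probability is far below 1/n for the 2000-point sample, which is exactly the regime the abstract describes: the estimate rests entirely on the fitted tail, so its quality degrades quickly when the sample carries little tail information.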
Abstract:
Transportation Department, Office of the Assistant Secretary for Systems Development and Technology, Washington, D.C.
Abstract:
Advertising investment and audience figures indicate that television continues to lead as a mass advertising medium. However, its effectiveness is questioned because of problems such as zapping, saturation, and audience fragmentation, which has favoured the development of non-conventional advertising formats. This study provides empirical evidence for the theoretical work in this area. The investigation analyzes the recall generated by four non-conventional advertising formats in a real environment: short programme (branded content), television sponsorship, and internal and external telepromotion, versus the more conventional spot. The methodology integrated secondary data with primary data from ad hoc computer-assisted telephone interviews (CATI) with a sample of 2,000 individuals, aged 16 to 65, representative of the total television audience. Our findings show that the non-conventional advertising formats are more effective at a cognitive level, generating higher levels of both unaided and aided recall than the spot in all the formats analyzed.
Abstract:
This paper presents an Adaptive Maximum Entropy (AME) approach for modeling biological species. The Maximum Entropy algorithm (MaxEnt) is one of the most widely used methods for modeling the geographical distribution of biological species. The approach presented here is an alternative to the classical algorithm: instead of using the same set of features throughout training, AME tries to insert or remove a single feature at each iteration. The aim is to reach convergence faster without degrading the performance of the generated models. Preliminary experiments showed improvements in both accuracy and execution time. Comparisons with other algorithms are beyond the scope of this paper. Several important research directions are proposed as future work.
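The one-feature-per-iteration idea can be illustrated with a generic greedy wrapper that inserts a feature only when it improves a model score. This is a hypothetical toy, not the AME algorithm itself: the nearest-centroid scorer and the synthetic data are assumptions made purely for illustration.

```python
import random

def score(X, y, features):
    """Training accuracy of a nearest-centroid classifier restricted to `features`."""
    if not features:
        return 0.0
    def centroid(cls):
        rows = [x for x, label in zip(X, y) if label == cls]
        return [sum(r[f] for r in rows) / len(rows) for f in features]
    c0, c1 = centroid(0), centroid(1)
    def predict(x):
        d0 = sum((x[f] - c) ** 2 for f, c in zip(features, c0))
        d1 = sum((x[f] - c) ** 2 for f, c in zip(features, c1))
        return 0 if d0 <= d1 else 1
    return sum(predict(x) == label for x, label in zip(X, y)) / len(y)

def greedy_insert(X, y, n_features):
    """Try to insert one feature per iteration; keep it only if the score improves."""
    selected, best = [], 0.0
    for _ in range(n_features):
        candidates = [f for f in range(n_features) if f not in selected]
        if not candidates:
            break
        f_best = max(candidates, key=lambda f: score(X, y, selected + [f]))
        new = score(X, y, selected + [f_best])
        if new > best:
            selected.append(f_best)
            best = new
        else:
            break                      # no insertion helps: converged
    return selected

random.seed(1)
y = [i % 2 for i in range(40)]
# feature 0 tracks the label; features 1 and 2 are pure noise
X = [[label + random.gauss(0, 0.1), random.random(), random.random()] for label in y]
selected = greedy_insert(X, y, 3)
print(selected)
```

On this synthetic data the loop keeps only the informative feature and stops early, which is the intended benefit: fewer features in play means cheaper iterations without a loss of model quality.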
Abstract:
Purpose: The aim of this research was to assess the dimensional accuracy of orbital prostheses based on reversed images generated by computer-aided design/computer-assisted manufacturing (CAD/CAM) using computed tomography (CT) scans. Materials and Methods: CT scans of the faces of 15 adults, men and women older than 25 years of age with no congenital or acquired craniofacial defects, were processed using CAD software to produce 30 reversed three-dimensional models of the orbital region. These models were then processed using the CAM system, by means of selective laser sintering, to generate surface prototypes of the volunteers' orbital regions. Two moulage impressions of the face of each volunteer were taken to manufacture 15 pairs of casts. Orbital defects were created on the right or left side of each cast. The surface prototypes were adapted to the casts and then flasked to fabricate silicone prostheses. The establishment of anthropometric landmarks on the orbital region and facial midline allowed for the collection of 31 linear measurements, used to assess the dimensional accuracy of the orbital prostheses and their location on the face. Results: Comparative analyses of the linear measurements taken from the orbital prostheses and from the opposite sides that originated the surface prototypes demonstrated that the orbital prostheses presented similar vertical, transversal, and oblique dimensions, as well as similar depth. There was no transverse or oblique displacement of the prostheses. Conclusion: From a clinical perspective, the small differences observed after analyzing all 31 linear measurements did not indicate facial asymmetry. The dimensional accuracy of the orbital prostheses suggested that the CAD/CAM system assessed herein may be applicable for clinical purposes. Int J Prosthodont 2010;23:271-276.
Abstract:
Dynamic and distributed environments are hard to model, since they suffer from unexpected changes, incomplete knowledge, and conflicting perspectives, and thus call for appropriate knowledge representation and reasoning (KRR) systems. Such KRR systems must handle sets of dynamic beliefs and be sensitive to communicated and perceived changes in the environment; consequently, they may have to drop current beliefs in the face of new findings, or disregard new data that conflicts with stronger convictions held by the system. Not only do they need to represent and reason with beliefs, they must also perform belief revision to maintain the overall consistency of the knowledge base. One way of developing such systems is to use reason maintenance systems (RMS). In this paper we provide an overview of the most representative types of RMS, also known as truth maintenance systems (TMS), which are computational instances of the foundations-based theory of belief revision. An RMS module works together with a problem solver. The latter feeds the RMS with assumptions (core beliefs) and conclusions (derived beliefs), accompanied by their respective foundations. The role of the RMS module is to store the beliefs, associate each belief (core or derived) with its set of supporting foundations, and maintain the consistency of the overall reasoning by keeping, for each represented belief, its current supporting justifications. Two major approaches to reason maintenance are used: single-context and multiple-context reasoning systems. In single-context systems, each belief is associated with the beliefs that directly generated it, as in the justification-based TMS (JTMS) and the logic-based TMS (LTMS); in their multiple-context counterparts, each belief is associated with the minimal set of assumptions from which it can be inferred, as in the assumption-based TMS (ATMS) and the multiple belief reasoner (MBR).
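The justification-based labeling at the heart of a JTMS can be sketched in a few lines for the monotone case. This is a deliberate simplification (a real JTMS also supports non-monotonic justifications with OUT-lists and incremental relabeling); the belief names are illustrative.

```python
def label_beliefs(assumptions, justifications):
    """Compute the set of IN beliefs: a belief is IN if it is an enabled
    assumption or has a justification whose antecedents are all IN.
    Computed by iterating to a fixpoint."""
    believed = set(assumptions)
    changed = True
    while changed:
        changed = False
        for antecedents, consequent in justifications:
            if consequent not in believed and all(a in believed for a in antecedents):
                believed.add(consequent)
                changed = True
    return believed

# Each justification pairs a set of antecedent beliefs with a consequent.
justifications = [
    ({"bird", "not_injured"}, "can_fly"),
    ({"can_fly"}, "can_escape"),
]

# With both assumptions enabled, the derived beliefs are labeled IN.
in_full = label_beliefs({"bird", "not_injured"}, justifications)
# Retracting an assumption drops every belief whose support depended on it.
in_retracted = label_beliefs({"bird"}, justifications)
print(in_full, in_retracted)
```

Relabeling from scratch after each retraction, as done here, shows the foundations-based semantics clearly; production systems instead propagate the change incrementally through the dependency network.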
Abstract:
Local soil classes generate map units directly related to soil-landscape properties, so taking farmers' knowledge into consideration is essential when automating the procedure. The aim of this study was to map local soil classes by computer-assisted cartography (CAC), using several combinations of topographic properties produced by a GIS (digital elevation model, aspect, slope, and profile curvature). A decision tree was used to find the number of topographic properties required for digital cartography of the local soil classes. The maps produced were evaluated on map-quality attributes defined as the precision and accuracy of the CAC-based maps. The evaluation was carried out in Central Mexico using three maps of local soil classes with contrasting landscape and climatic conditions (desert, temperate, and tropical). In all three areas, the precision of the CAC maps based on elevation as the topographical property (56%) was higher than that of maps based on slope, aspect, or profile curvature. The accuracy of the maps (boundary locations) was, however, low (33%); further research is therefore required to improve this indicator.
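The way a decision tree ranks topographic properties can be illustrated with a single-split stump: score each property by how well one threshold on it separates the local soil classes. This is a hypothetical toy with invented elevation and slope values, not the study's data or its actual tree.

```python
def stump_error(values, labels):
    """Lowest misclassification count achievable with one threshold split."""
    best = len(labels)
    for t in sorted(set(values)):
        for left in (0, 1):  # which class the low side of the split predicts
            err = sum((left if v <= t else 1 - left) != lab
                      for v, lab in zip(values, labels))
            best = min(best, err)
    return best

# Toy data: two local soil classes; elevation separates them, slope does not.
soil_class = [0, 0, 0, 0, 1, 1, 1, 1]
properties = {
    "elevation": [2100, 2150, 2200, 2250, 2400, 2450, 2500, 2550],
    "slope":     [5, 12, 3, 9, 6, 11, 4, 10],
}
ranking = sorted(properties, key=lambda p: stump_error(properties[p], soil_class))
print(ranking[0])  # the property whose single split best predicts the class
```

A full decision tree repeats this split search recursively; the point here is only that a property like elevation, which cleanly partitions the classes, dominates the ranking, mirroring the study's finding that elevation alone gave the most precise maps.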
Abstract:
Hippocampal adult neurogenesis results in the continuous formation of new neurons in the adult hippocampus, which participate in learning and memory. Manipulations that increase adult neurogenesis have great clinical potential in pathologies involving memory loss. Intriguingly, most newborn neurons die during their maturation, so increasing newborn neuron survival during maturation may be a powerful way to increase overall adult neurogenesis. The factors governing this neuronal death are still poorly understood. In my PhD project, we hypothesized that synaptogenesis and synaptic activity play a role in the survival of newborn hippocampal neurons. We studied three factors potentially involved in regulating the synaptic integration of adult-born neurons. First, we used propofol anesthesia to provoke a global increase in the GABAergic activity of the network, and we evaluated the outcome on newborn neuron synaptic integration, morphological development, and survival. Propofol anesthesia impaired the dendritic maturation and survival of adult-born neurons in an age-dependent manner. Next, we examined the development of astrocytic ensheathment of the synapses formed by newborn neurons, hypothesizing that astrocytes are involved in their synaptic integration. Astrocytic processes ensheathed the synapses of newborn neurons very early in their development, and these processes modulated synaptic transmission onto these cells. Finally, we studied the cell-autonomous effects of overexpressing synaptic adhesion molecules on the development, synaptic integration, and survival of newborn neurons, and found that manipulating a single adhesion molecule was sufficient to modify synaptogenesis and/or synapse function, and to modify newborn neuron survival.
Together, these results suggest that the activity of the neuronal network, the modulation of glutamate transport by astrocytes, and the neuron's own synapse formation and activity may regulate the survival of newborn neurons. Thus, the survival of newborn neurons may depend on their ability to communicate with the network. This knowledge is crucial for finding ways to increase neurogenesis in patients. More generally, understanding how the neurogenic niche works and which factors are important for the generation, maturation, and survival of neurons is fundamental if we are one day to replace neurons in any region of the brain.
Abstract:
This paper presents SiMR, a simulator of the Rudimentary Machine designed to be used in a first course on computer architecture in Software Engineering and Computer Engineering programmes. The Rudimentary Machine contains all the basic elements of a RISC computer, and SiMR allows editing, assembling, and executing programmes for this processor. SiMR is used at the Universitat Oberta de Catalunya as one of the most important resources in the Virtual Computing Architecture and Organisation Laboratory: students work at home with the simulator, and reports containing their work are automatically generated for evaluation by lecturers. The results of a survey show that most students consider SiMR a highly necessary or even indispensable resource for learning the basic concepts of computer architecture.
Abstract:
Objective: To compare the accuracy of computer-aided ultrasound (US) and magnetic resonance imaging (MRI), by means of hepatorenal gradient analysis, in the evaluation of nonalcoholic fatty liver disease (NAFLD) in adolescents. Materials and Methods: This prospective, cross-sectional study evaluated 50 adolescents (aged 11–17 years), including 24 obese and 26 eutrophic individuals. All adolescents underwent computer-aided US, MRI, laboratory tests, and anthropometric evaluation. Sensitivity, specificity, positive and negative predictive values, and accuracy were evaluated for both imaging methods, with subsequent generation of the receiver operating characteristic (ROC) curve and calculation of the area under the ROC curve to determine the most appropriate cutoff point for the hepatorenal gradient in order to predict the degree of steatosis, utilizing the MRI results as the gold standard. Results: The obese group comprised 29.2% girls and 70.8% boys, and the eutrophic group 69.2% girls and 30.8% boys. The prevalence of NAFLD was 19.2% in the eutrophic group and 83% in the obese group. The ROC curve generated for the hepatorenal gradient with a cutoff point of 13 presented 100% sensitivity and 100% specificity. When the same cutoff point was applied to the eutrophic group, false-positive results were observed in 9.5% of cases (90.5% specificity) and false-negative results in 0% (100% sensitivity). Conclusion: Computer-aided US with hepatorenal gradient calculation is a simple and noninvasive technique for the semiquantitative evaluation of hepatic echogenicity, and could be useful in the follow-up of adolescents with NAFLD, in population screening for this disease, and in clinical studies.
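Choosing a cutoff from a ROC curve, as done above for the hepatorenal gradient, amounts to scanning candidate thresholds for the best sensitivity/specificity trade-off (e.g. maximizing the Youden index). A generic sketch; the gradient values below are invented for illustration and are not the study's data:

```python
def sens_spec(scores, labels, cutoff):
    """Sensitivity and specificity when `score >= cutoff` predicts disease."""
    tp = sum(s >= cutoff and l == 1 for s, l in zip(scores, labels))
    fn = sum(s < cutoff and l == 1 for s, l in zip(scores, labels))
    tn = sum(s < cutoff and l == 0 for s, l in zip(scores, labels))
    fp = sum(s >= cutoff and l == 0 for s, l in zip(scores, labels))
    return tp / (tp + fn), tn / (tn + fp)

def best_cutoff(scores, labels):
    """Cutoff maximizing the Youden index (sensitivity + specificity - 1)."""
    return max(set(scores),
               key=lambda c: sum(sens_spec(scores, labels, c)))

# Hypothetical hepatorenal gradients: steatosis cases score above controls.
gradients = [5, 7, 8, 9, 11, 13, 14, 16, 18, 20]
has_nafld = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
c = best_cutoff(gradients, has_nafld)
sens, spec = sens_spec(gradients, has_nafld, c)
print(c, sens, spec)
```

On perfectly separated toy data the chosen cutoff achieves sensitivity and specificity of 1.0, the ideal case the study reports at its cutoff of 13; with overlapping real-world distributions the Youden-optimal point trades one off against the other.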