77 results for Breakdowns
Abstract:
Includes bibliography
Abstract:
Includes bibliography
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
Pós-graduação em História - FCHS
Abstract:
The objective of this project was to study the epidemiology of bovine tuberculosis in the presence of a wildlife reservoir species. Cross-sectional and longitudinal studies of possum populations with endemic bovine tuberculosis infection were analyzed, and the results were used to develop a computer simulation model of the dynamics of bovine tuberculosis infection in possum populations. A case-control study of tuberculosis breakdowns in cattle herds in the Central North Island of New Zealand was conducted to identify risk factors other than exposure to tuberculosis in local possum populations.
Abstract:
OBJECTIVE: To report clinical evaluation of the clamp rod internal fixator 4.5/5.5 (CRIF 4.5/5.5) in bovine long-bone fracture repair. STUDY DESIGN: Retrospective study. ANIMALS: Cattle (n=22) with long-bone fractures. METHODS: Records for cattle with long-bone fractures repaired between 1999 and 2004 with CRIF 4.5/5.5 were reviewed. Quality of fracture repair, fracture healing, and clinical outcome were investigated by means of clinical examination, medical records, radiographs, and telephone questionnaire. RESULTS: Successful long-term outcome was achieved in 18 cattle (82%); 4 were euthanatized 2-14 days postoperatively because of fracture breakdowns. Two cattle had movement of clamps on the rod. Moderate to severe callus formation was evident in 11 cattle 6 months postoperatively. CONCLUSIONS: Movement of clamps on the rod was recognized as implant failure unique to the CRIF. This occurred in cattle with poor fracture stability because of an extensive cortical defect. The CRIF system may not be ideal to treat metacarpal/metatarsal fractures because its voluminous size makes skin closure difficult, thereby increasing the risk of postoperative infections. CLINICAL RELEVANCE: CRIF cannot be recommended for repair of complicated long-bone fractures in cattle.
Abstract:
We derive a new class of iterative schemes for accelerating the convergence of the EM algorithm by exploiting the connection between fixed-point iterations and extrapolation methods. First, we present a general formulation of one-step iterative schemes, which are obtained by cycling with the extrapolation methods. We then square the one-step schemes to obtain the new class of methods, which we call SQUAREM. Squaring a one-step iterative scheme simply means applying it twice within each cycle of the extrapolation method. Here we focus on first-order, or rank-one, extrapolation methods for two reasons: (1) simplicity and (2) computational efficiency. In particular, we study two first-order extrapolation methods, the reduced rank extrapolation (RRE1) and the minimal polynomial extrapolation (MPE1). The convergence of the new schemes, both one-step and squared, is non-monotonic with respect to the residual norm. The first-order one-step and SQUAREM schemes are linearly convergent, like the EM algorithm, but with a faster rate of convergence. We demonstrate, through five different examples, the effectiveness of the first-order SQUAREM schemes, SqRRE1 and SqMPE1, in accelerating the EM algorithm. The SQUAREM schemes are also shown to be vastly superior to their one-step counterparts, RRE1 and MPE1, in terms of computational efficiency. The proposed extrapolation schemes can fail due to the numerical problems of stagnation and near breakdown. We have developed a new hybrid iterative scheme that combines the RRE1 and MPE1 schemes in such a manner that it overcomes both stagnation and near breakdown. The squared first-order hybrid scheme, SqHyb1, emerges as the iterative scheme of choice in our numerical experiments: it combines the fast convergence of SqMPE1, while avoiding near breakdowns, with the stability of SqRRE1, while avoiding stagnation. The SQUAREM methods can be incorporated very easily into an existing EM algorithm. They only require the basic EM step for their implementation and do not need any other auxiliary quantities such as the complete-data log-likelihood or its gradient or Hessian. They are an attractive option in problems with a very large number of parameters, and in problems where the statistical model is complex, the EM algorithm is slow, and each EM step is computationally demanding.
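The abstract gives no pseudocode; as a hedged illustration, the following is a minimal sketch of one squared extrapolation cycle in the spirit of SQUAREM, assuming only a generic fixed-point map em_update (one basic EM step). The steplength shown is a first-order, MPE-style choice; it is not the authors' exact SqMPE1, SqRRE1 or SqHyb1 implementation.

```python
import numpy as np

def squarem_step(theta, em_update):
    """One squared extrapolation cycle in the spirit of SQUAREM (sketch).

    theta     : current parameter vector (1-D numpy array)
    em_update : function performing one basic EM (fixed-point) step
    """
    theta1 = em_update(theta)            # first EM step
    theta2 = em_update(theta1)           # second EM step
    r = theta1 - theta                   # first difference
    v = (theta2 - theta1) - r            # second difference ("curvature")
    denom = r @ v
    if abs(denom) < 1e-12:               # near-breakdown guard: fall back to plain EM
        return theta2
    alpha = (r @ r) / denom              # MPE-style steplength; an RRE-style choice
                                         # would be (r @ v) / (v @ v)
    alpha = min(alpha, -1.0)             # safeguard the (negative) steplength
    theta_new = theta - 2.0 * alpha * r + alpha ** 2 * v
    return em_update(theta_new)          # one stabilising EM step after extrapolation

# Toy usage with a linearly convergent map standing in for an EM update:
fixed_point = lambda x: 0.8 * x + 0.2 * np.array([1.0, 2.0])
x = np.zeros(2)
for _ in range(5):
    x = squarem_step(x, fixed_point)
print(x)   # reaches the fixed point [1, 2] far faster than iterating fixed_point alone
```

Because a cycle like this needs nothing beyond the basic EM step, it can wrap an existing EM implementation without touching its internals, which is the property the abstract emphasises.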
Abstract:
Over the last couple of decades, the UK experienced a substantial increase in the incidence and geographical spread of bovine tuberculosis (TB), in particular since the epidemic of foot-and-mouth disease (FMD) in 2001. The initiation of the Randomized Badger Culling Trial (RBCT) in 1998 in south-west England provided an opportunity for in-depth collection of questionnaire data (covering farming practices, herd management and husbandry, trading and wildlife activity) from herds that had experienced a TB breakdown between 1998 and early 2006 and from randomly selected control herds, both within and outside the RBCT (the so-called TB99 and CCS2005 case-control studies). The data collated were split into four separate and comparable substudies related to either the pre-FMD or post-FMD period, which are brought together and discussed here for the first time. The findings suggest that the risk factors associated with TB breakdowns may have changed. Higher Mycobacterium bovis prevalence in badgers following the FMD epidemic may explain why the presence of badgers on a farm was identified as a prominent TB risk factor only post-FMD. The strong emergence of contact/trading TB risk factors post-FMD suggests that the purchasing and movement of cattle, which took place to restock FMD-affected areas after 2001, may have exacerbated the TB problem. Post-FMD analyses also highlighted the potential impact of environmental factors on TB risk. Although no unique and universal solution exists to reduce the transmission of TB to and among British cattle, there is evidence to suggest that applying the broad principles of biosecurity on farms reduces the risk of infection. However, with trading remaining an important route of local and long-distance TB transmission, improvements in the detection of infected animals during pre- and post-movement testing should further reduce the geographical spread of the disease.
Abstract:
Similar to other health care processes, referrals are susceptible to breakdowns. Breakdowns in the referral process can lead to poor continuity of care, slow diagnostic processes, delays and repetition of tests, patient and provider dissatisfaction, and a loss of confidence in providers. These facts, and the need for a deeper understanding of referrals in healthcare, motivated a comprehensive study of referrals. The research began with the real problem and the need to understand referral communication as a means to improve patient care. Despite previous efforts to explain referrals and the dynamics and interrelations of the variables that influence them, there is no common, contemporary, and accepted definition of what a referral is in the health care context. The research agenda was therefore guided by the need to explore referrals as an abstract concept by: 1) developing a conceptual definition of referrals, 2) developing a model of referrals, and finally 3) proposing a comprehensive research framework. This dissertation has resulted in a standard conceptual definition of referrals and a model of referrals. In addition, a mixed-method framework to evaluate referrals was proposed, and a data-driven model was developed to predict whether a referral would be approved or denied by a specialty service. The three manuscripts included in this dissertation present the basis for studying and assessing referrals using a common framework, which should allow an easier comparative research agenda to improve referrals while taking into account the context in which they occur.
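The dissertation abstract does not specify the modelling approach behind the approve/deny prediction; purely as a hedged sketch, a simple classifier on hypothetical referral features (all feature names and data below are illustrative assumptions, not from the source) could look like this:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Hypothetical referral records: columns stand for illustrative features only
# (e.g. urgency flag, prior visits, specialty code, completeness of clinical notes).
rng = np.random.default_rng(0)
X = rng.random((500, 4))
# Toy labels: 1 = approved, 0 = denied (synthetic, for demonstration only)
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(0, 0.2, 500) > 0.8).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```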
Abstract:
This study presents the first consolidation of palaeoclimate proxy records from multiple archives to develop statistical rainfall reconstructions for southern Africa covering the last two centuries. State-of-the-art ensemble reconstructions reveal multi-decadal rainfall variability in the summer and winter rainfall zones. A decrease in precipitation amount over time is identified in the summer rainfall zone. No significant change in precipitation amount occurred in the winter rainfall zone, but rainfall variability has increased over time. Generally synchronous rainfall fluctuations between the two zones are identified on decadal scales, with common wet (dry) periods reconstructed around 1890 (1930). A strong relationship between seasonal rainfall and sea surface temperatures (SSTs) in the surrounding oceans is confirmed. Coherence among decadal-scale fluctuations of southern African rainfall, regional SST, SSTs in the Pacific Ocean and rainfall in south-eastern Australia suggests SST-rainfall teleconnections across the southern hemisphere. Temporal breakdowns of the SST-rainfall relationship in the southern African regions and of the connection between the two rainfall zones are observed, for example during the 1950s. Our results confirm the complex interplay between large-scale teleconnections, regional SSTs and local effects in modulating multi-decadal southern African rainfall variability over long timescales.
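As a hedged illustration of how such temporal breakdowns in an SST-rainfall relationship can be screened for, a running-window correlation on synthetic stand-in series (not the study's proxy data) might look like this:

```python
import numpy as np
import pandas as pd

# Illustrative only: synthetic annual series stand in for reconstructed rainfall
# and regional SST anomalies; the real proxy reconstructions are not reproduced here.
years = pd.RangeIndex(1800, 2000, name="year")
rng = np.random.default_rng(1)
sst = pd.Series(np.sin(np.arange(200) / 15) + rng.normal(0, 0.3, 200), index=years)
rain = pd.Series(0.8 * sst + rng.normal(0, 0.5, 200), index=years)

# 31-year running correlation: windows where it drops toward zero would flag
# periods when the SST-rainfall relationship temporarily breaks down.
running_corr = rain.rolling(31, center=True).corr(sst)
print(running_corr.loc[1940:1960])
```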
Abstract:
The aim of this final degree project is the study and development of an Android application that provides support and feedback for the public transport services in Krakow, Poland. The main functionality of the system is to track the position of a given bus or tram line and display its location accurately on a map. Development involves three stages: first, implementing a system that obtains the geographical coordinates of the public transport vehicles at each instant; second, collecting this data and storing it in a database on a web server; and third, building a client that queries the stored data in real time, retrieving the latest position registered for the chosen line and showing it with a marker on the map. Tracking the vehicles would ideally rely on a public API exposing the positions reported by each vehicle's GPS, but no such API exists for the bus services and the tram API is for private use only, so a second Android application was developed to perform the server-side role. Through a simple interface it lets the operator select the line number and a code identifying each particular vehicle (e.g., six trams may be running on line 24 at the same time), and it obtains the phone's geographical coordinates, including latitude, longitude and bearing, from the GPS provider. This makes it possible to simulate real-time operation by running the server application inside a tram or bus while the client application simultaneously requests and displays that vehicle's position. The client can also consult the route of any line without Internet access: routes and stops are stored in the phone's memory as XML files, which take up little space and allow a route to be checked at any moment regardless of network availability. The user can also consult the official timetables for each line, although this does require an Internet connection because it is done through the official MPK website. Storing the coordinates of every vehicle at every instant requires a database on a server, implemented with MySQL and PHP: GET and POST requests are sent to PHP scripts, which translate them into the corresponding MySQL queries. Finally, the collected position data enables some analysis of the timetables. By comparing the exact time at which vehicles passed each stop with the time they should have passed according to the official timetables, errors can be detected, and it becomes possible to determine whether a delay is an isolated incident caused by external factors (traffic jams, breakdowns, ...) or something that happens so often that the official timetable should be corrected.
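As a hedged sketch of the delay analysis described above, written in Python with illustrative values rather than the project's PHP/MySQL stack, the comparison of observed stop-passing times against the official timetable could look like this (stop times and the three-minute threshold are assumptions):

```python
import pandas as pd

# Hypothetical data; the project stores vehicle positions in MySQL via PHP scripts.
observed = pd.DataFrame({
    "date": pd.to_datetime(["2015-05-04", "2015-05-05", "2015-05-06", "2015-05-07"]),
    "observed_time": ["08:03", "08:09", "08:02", "08:08"],   # when line 24 passed the stop
})
scheduled_time = "08:00"                                      # official MPK timetable entry

to_minutes = lambda s: int(s[:2]) * 60 + int(s[3:])
observed["delay_min"] = observed["observed_time"].map(to_minutes) - to_minutes(scheduled_time)

# Recurring large delays suggest the official timetable should be corrected;
# isolated ones point to external causes (traffic jams, breakdowns).
median_delay = observed["delay_min"].median()
print(f"median delay: {median_delay:.0f} min ->",
      "timetable likely needs revision" if median_delay > 3 else "occasional delays only")
```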
Abstract:
Research groups at universities in developed countries show a growing interest in providing solutions to problems of developing countries. In this context we have studied problems typical of many (educational) institutions, such as the lack of technicians to repair the computers, the administration of the machines, and the difficulty of maintaining and configuring the old hardware available, owing to the variety of characteristics of the different machines and the number of hardware breakdowns and software issues (viruses, administration problems) that the local staff have to deal with on their equipment. We propose a thin-client approach that takes into account the human, hardware and software characteristics of institutions in developing countries to provide a complete service for a computer network. Network administration is reduced to the administration of a single server, maintenance of the machines is simplified, and old computers can behave like a powerful computer. Our proposal results in a design that is cheap, simple (from the support point of view) and powerful (in terms of the functionality achieved).
Abstract:
Microcomputer systems consist mainly of hardware and software. Over time the hardware degrades, deteriorates and occasionally fails; the software evolves, requires maintenance and updates, and sometimes fails and has to be repaired or reinstalled. On the hardware side, this work analyses the main components common to most of these systems, both desktop and laptop, independently of the operating system, as well as the main peripherals, and recommends the tools needed to assemble, maintain and repair such equipment. The main internal components are the motherboard, RAM, processor, hard drive, case, power supply and graphics card; the most important peripherals are the monitor, keyboard, mouse, printer and scanner. A section details the different types of BIOS and their main configuration parameters. For all these components, internal and peripheral, the features they offer and the details that deserve special attention when choosing between alternatives are analysed. Where different technologies exist, they are compared, highlighting their respective advantages and disadvantages so that the end user can decide which best fits their needs in terms of performance and cost; examples are inkjet versus laser printers, or mechanical hard disks versus solid-state drives (SSD). Because all these components are interconnected and depend on one another, a chapter is devoted to how they are assembled, highlighting the most common mistakes and faults, and a series of preventive maintenance tasks is described to extend the useful life of the equipment and avoid failures caused by misuse. Maintenance can be classified as predictive, perfective, adaptive, preventive and corrective; the focus here is mainly on preventive maintenance, described above, and on corrective maintenance, both software and hardware. Corrective maintenance covers the analysis, localisation, diagnosis and repair of hardware and software failures and breakdowns.
The most typical failures of each component are described, together with how they manifest themselves, so that specific tests can be run to diagnose and narrow down the fault. Where repair is possible, the instructions to follow are detailed; otherwise, replacement of the part or component is recommended. A section is devoted to virtualization, a growing technology that is very useful for software testing, reducing testing time and cost; another interesting aspect of virtualization is its use to run several virtual servers on a single physical server, which represents significant savings in hardware and maintenance costs, such as electricity consumption. On the software side, a detailed study is made of the main security problems and vulnerabilities to which a microcomputer system is exposed, listing and describing the behaviour of the different types of malicious elements that can infect a computer, the precautions to take to minimise the risks, and the utilities that can be run to prevent infection or to clean an infected machine. Maintenance and technical assistance, especially of the software kind, do not always require the on-site attention of a qualified technician, so a chapter is devoted to the remote-assistance tools that can be used in this field, describing some of the most popular ones on the market, how they work, their features and their requirements. In this way users can be attended to quickly, minimising response times and reducing costs.
Abstract:
Culture consists of shared cognitive representations in the minds of individuals. This paper investigates the extent to which English speakers share the "same" semantic structure of English kinship terms. The semantic structure is defined as the arrangement of the terms relative to each other, represented in a metric space in which items judged more similar are placed closer to each other than items judged less similar. The cognitive representation of the semantic structure, residing in the mind of an individual, is measured by judged-similarity tasks involving comparisons among terms. Using six independent measurements from each of 122 individuals, correspondence analysis represents the data in a common multidimensional spatial representation. Judged by a variety of statistical procedures, the individuals in our sample share virtually identical cognitive representations of the semantic structure of kinship terms. This model of culture accounts for 70-90% of the total variability in these data. We argue that our findings on kinship should generalize to all semantic domains (e.g., animals, emotions, etc.). The investigation of semantic domains is important because they may reside in localized functional units in the brain, because they relate to a variety of cognitive processes, and because they have the potential to provide methods for diagnosing individual breakdowns in the structure of cognitive representations typical of ailments such as Alzheimer's disease.
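As a hedged illustration of placing terms in a metric space from judged similarities, the following sketch uses multidimensional scaling on a made-up similarity matrix; the study itself applies correspondence analysis to judgments from 122 respondents, which this toy example does not reproduce:

```python
import numpy as np
from sklearn.manifold import MDS

# Toy judged-similarity matrix for a few kinship terms (values are invented).
terms = ["mother", "father", "daughter", "son", "aunt", "uncle"]
similarity = np.array([
    [1.0, 0.8, 0.7, 0.5, 0.5, 0.3],
    [0.8, 1.0, 0.5, 0.7, 0.3, 0.5],
    [0.7, 0.5, 1.0, 0.8, 0.4, 0.2],
    [0.5, 0.7, 0.8, 1.0, 0.2, 0.4],
    [0.5, 0.3, 0.4, 0.2, 1.0, 0.8],
    [0.3, 0.5, 0.2, 0.4, 0.8, 1.0],
])
dissimilarity = 1.0 - similarity

# Embed the terms in a 2-D metric space: terms judged more similar land closer together.
coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(dissimilarity)
for term, (x, y) in zip(terms, coords):
    print(f"{term:10s} {x:6.2f} {y:6.2f}")
```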
Abstract:
We present a study of the dependence of electric breakdown discharge properties on electrode geometry and of the breakdown field in liquid argon near its boiling point. The measurements were performed with a spherical cathode and a planar anode at distances ranging from 0.1 mm to 10.0 mm. A detailed study of the time evolution of the breakdown volt-ampere characteristics was performed for the first time; it revealed a slow streamer-development phase in the discharge. The results of a spectroscopic study of the visible light emission of the breakdowns complement the measurements. The light emission from the initial phase of the discharge is attributed to electroluminescence of liquid argon following a current of drifting electrons. These results help to set benchmarks for the breakdown-safe design of ionization detectors, such as Liquid Argon Time Projection Chambers (LAr TPC).