856 results for System analysis - Data processing
Abstract:
A descriptive, retrospective study was conducted using the database of microbiological isolates documented in the ICUs of the Fundación Santa Fe de Bogotá for the year 2014. The prevalence of resistant bacteria in FSFB isolates is not low, so accurate empirical therapy consistent with the local flora is required. Analytical studies are needed to evaluate the factors associated with the development of multidrug-resistant organisms and with mortality from sepsis.
Abstract:
Objective: To establish the correlation between lighting conditions, visual angle, contrast discrimination and visual acuity and the onset of visual symptoms in computer operators. Materials and methods: A cross-sectional, correlational study of a sample of 136 administrative workers at a call center belonging to a health-care institution in Bogotá. A questionnaire was used to assess sociodemographic and occupational variables, the computer vision symptom scale (CVSS17) was applied, a medical evaluation was performed, and lighting levels and operator-to-screen distance were measured. With the collected data, a bivariate statistical analysis was carried out and the correlation between lighting conditions, visual angle, contrast discrimination and visual acuity and the onset of computer-related visual symptoms was established. The analysis used measures of central tendency and dispersion and either the parametric Pearson or the non-parametric Spearman correlation coefficient, with normality first assessed using the Shapiro-Wilk test. Statistical tests were evaluated at a 5% significance level (p < 0.05). Results: The average age of the participants was 36.3 years, with a range of 22 to 57 years, and the predominant gender was female (79.4%). Visual symptoms associated with computer screen use were found in 59.6%, the most frequent being epiphora (70.6%), photophobia (67.6%) and ocular burning (54.4%). A significant inverse correlation was reported between lighting levels and the manifestation of photophobia (p = 0.02; r = 0.262). No significant correlation was found between the reported symptoms and visual angle, visual acuity or contrast discrimination. Conclusion: The workplace lighting conditions of the study group are related to the manifestation of photophobia. An association was also found between visual symptoms and sociodemographic variables, specifically gender, screen photophobia, visual fatigue and photophobia.
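A minimal sketch in Python of the correlation procedure described above, using scipy.stats; the variable names and data are illustrative, not taken from the study:

```python
import numpy as np
from scipy import stats

def correlate(x, y, alpha=0.05):
    """Correlate two variables, choosing Pearson or Spearman after a
    Shapiro-Wilk normality check, as in the study's analysis plan."""
    normal = (stats.shapiro(x).pvalue > alpha and
              stats.shapiro(y).pvalue > alpha)
    if normal:
        r, p = stats.pearsonr(x, y)   # parametric
        return "Pearson", r, p
    r, p = stats.spearmanr(x, y)      # non-parametric
    return "Spearman", r, p

# Illustrative data: lighting level (lux) vs. a photophobia score.
rng = np.random.default_rng(0)
lux = rng.uniform(200, 800, size=136)
photophobia = 5 - 0.004 * lux + rng.normal(0, 0.5, size=136)
print(correlate(lux, photophobia))
```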
Abstract:
The optimisation of systems and models has become one of the most important factors in the pursuit of greater process efficiency. This concept is not foreign to school transport, an environment that changes constantly with the needs of its clients and that carries a strong responsibility towards its users, the children who use the service, in terms of punctuality and safety, while constantly seeking to reduce costs. This project describes the problems encountered in this area at The English School and proposes a simple optimisation model that will allow notable improvements in times and costs, generating benefits for the institution in financial terms and in customer satisfaction. Through the implementation of this model it will be possible to identify common errors in the process, practical and easily applied solutions for transport management will be identified, and the results obtained from the sample used to develop the project will be presented.
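The abstract does not give the model's formulation; as a hedged illustration of the kind of baseline such a route optimisation might start from, here is a nearest-neighbour heuristic in Python (the stop names and coordinates are invented):

```python
import math

# Hypothetical bus stops as (name, x, y) in km; the school is the depot.
stops = [("School", 0, 0), ("Stop A", 2, 1), ("Stop B", 5, 3), ("Stop C", 1, 4)]

def dist(a, b):
    return math.hypot(a[1] - b[1], a[2] - b[2])

def nearest_neighbour_route(stops):
    """Greedy route construction: always drive to the closest unvisited
    stop. A simple baseline for reducing total travel time and cost."""
    route, remaining = [stops[0]], list(stops[1:])
    while remaining:
        nxt = min(remaining, key=lambda s: dist(route[-1], s))
        route.append(nxt)
        remaining.remove(nxt)
    return route, sum(dist(a, b) for a, b in zip(route, route[1:]))

route, total_km = nearest_neighbour_route(stops)
print([s[0] for s in route], f"{total_km:.1f} km")
```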
Abstract:
Evert Jan Baerends is a professor at the Free University of Amsterdam and at the Pohang University of Science and Technology in South Korea. He is one of the driving forces behind the Amsterdam Density Functional Program System, an implementation of the theory that has revolutionised the field of modern theoretical and computational chemistry. The importance of the electron density became evident in 1964, when Walter Kohn (Nobel Prize 1998) demonstrated that all the properties of molecules can be characterised from knowledge of the density. He attended the Girona Seminar at the invitation of the Institute of Computational Chemistry of the UdG.
Abstract:
The risk management process consists of a structured study of all aspects inherent to the work and comprises risk analysis, risk assessment and risk control. In risk analysis, all hazards present are identified and probability and severity are estimated according to the chosen risk assessment method. This study focuses on the first stage of the risk assessment process, specifically on risk analysis and on the information markers needed to estimate risk in the open-pit extractive industry (a high-risk activity). Considering that the risk level obtained depends fundamentally on the estimation of probability and severity, adjusted to each risk situation, we sought to identify the markers and to understand their influence on the results of the risk assessment (magnitude). The research plan was supported by a qualitative methodology for collecting, recording and analysing the data. In this study, information was gathered using the following research techniques: structured and planned observation of rock blasting with explosives, and individual interviews with trainers and risk managers (typical-case sampling). The qualitative analysis and discussion of the interview data relied on the following techniques: triangulation of analysts and cognitive data processing (complementary techniques), and comparison of the information markers against three validated risk assessment methods. The results obtained support the research hypotheses formulated: the type of risk influences the selection of information, and there are significant differences in the risk level obtained when distinct information markers are used to estimate probability and severity.
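As a hedged sketch of how the probability and severity estimates drive the resulting risk level, consider this toy magnitude calculation in Python (the 1-5 scales and banding are invented, not those of the three methods compared in the study):

```python
def risk_magnitude(probability: int, severity: int) -> str:
    """Risk magnitude as probability x severity on 1-5 scales,
    banded into qualitative levels."""
    score = probability * severity
    if score >= 15:
        return f"high ({score})"
    if score >= 6:
        return f"medium ({score})"
    return f"low ({score})"

# Distinct information markers can shift the estimates, and hence the level:
print(risk_magnitude(probability=2, severity=3))  # marker set 1 -> medium (6)
print(risk_magnitude(probability=4, severity=4))  # marker set 2 -> high (16)
```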
Abstract:
The motion of a car is described using a stochastic model in which the driving processes are the steering angle and the tangential acceleration. The model incorporates exactly the kinematic constraint that the wheels do not slip sideways. Two filters based on this model have been implemented, namely the standard EKF, and a new filter (the CUF) in which the expectation and the covariance of the system state are propagated accurately. Experiments show that i) the CUF is better than the EKF at predicting future positions of the car; and ii) the filter outputs can be used to control the measurement process, leading to improved ability to recover from errors in predictive tracking.
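A minimal sketch of the no-side-slip kinematic model that such filters propagate, written as a standard bicycle model in Python; the wheelbase, time step and inputs are illustrative, since the paper's exact parameterisation is not given here. An EKF or the CUF would propagate the mean and covariance of this state through the same dynamics.

```python
import numpy as np

WHEELBASE = 2.5  # metres (illustrative)

def step(state, steering, accel, dt=0.1):
    """Propagate (x, y, heading, speed) one time step under the
    kinematic constraint that the wheels do not slip sideways."""
    x, y, theta, v = state
    x += v * np.cos(theta) * dt
    y += v * np.sin(theta) * dt
    theta += (v / WHEELBASE) * np.tan(steering) * dt  # turn rate from steering
    v += accel * dt                                   # tangential acceleration
    return np.array([x, y, theta, v])

state = np.array([0.0, 0.0, 0.0, 10.0])  # start at the origin at 10 m/s
for _ in range(10):                      # one second of driving
    state = step(state, steering=0.05, accel=0.2)
print(state)
```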
Abstract:
Pair Programming is a technique from the software development method eXtreme Programming (XP) whereby two programmers work closely together to develop a piece of software. A similar approach has been used to develop a set of Assessment Learning Objects (ALO). Three members of academic staff have developed a set of ALOs for a total of three different modules (two with overlapping content). In each case a pair programming approach was taken to the development of the ALO. In addition to demonstrating the efficiency of this approach in terms of staff time spent developing the ALOs, a statistical analysis of the outcomes for students who made use of the ALOs is used to demonstrate the effectiveness of the ALOs produced via this method.
Abstract:
Model based vision allows prior knowledge of the shape and appearance of specific objects to be used in the interpretation of a visual scene; it provides a powerful and natural way to enforce the view consistency constraint. A model based vision system has been developed within ESPRIT VIEWS: P2152 which is able to classify and track moving objects (cars and other vehicles) in complex, cluttered traffic scenes. The fundamental basis of the method has been previously reported. This paper presents recent developments which have extended the scope of the system to include (i) multiple cameras, (ii) variable camera geometry, and (iii) articulated objects. All three enhancements have easily been accommodated within the original model-based approach.
Abstract:
This paper reports the current state of work to simplify our previous model-based methods for visual tracking of vehicles for use in a real-time system intended to provide continuous monitoring and classification of traffic from a fixed camera on a busy multi-lane motorway. The main constraints of the system design were: (i) all low level processing to be carried out by low-cost auxiliary hardware, (ii) all 3-D reasoning to be carried out automatically off-line, at set-up time. The system developed uses three main stages: (i) pose and model hypothesis using 1-D templates, (ii) hypothesis tracking, and (iii) hypothesis verification, using 2-D templates. Stages (i) & (iii) have radically different computing performance and computational costs, and need to be carefully balanced for efficiency. Together, they provide an effective way to locate, track and classify vehicles.
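The efficiency argument can be illustrated with a toy staged filter in Python (the scores are random stand-ins, not the paper's 1-D and 2-D template matching): a cheap first stage prunes most candidates so that the expensive verification stage runs on only a short list.

```python
import random

def cheap_1d_score(candidate):
    """Stage (i) stand-in: an inexpensive, noisy hypothesis score."""
    return candidate["signal"] + random.uniform(-0.2, 0.2)

def expensive_2d_score(candidate):
    """Stage (iii) stand-in: a costly but accurate verification score."""
    return candidate["signal"]

random.seed(1)
candidates = [{"id": i, "signal": random.random()} for i in range(1000)]

# Balance the stages: prune cheaply, verify expensively on the survivors.
shortlist = [c for c in candidates if cheap_1d_score(c) > 0.8]
vehicles = [c for c in shortlist if expensive_2d_score(c) > 0.8]
print(len(candidates), len(shortlist), len(vehicles))
```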
Abstract:
The Gauss–Newton algorithm is an iterative method regularly used for solving nonlinear least squares problems. It is particularly well suited to the treatment of very large scale variational data assimilation problems that arise in atmosphere and ocean forecasting. The procedure consists of a sequence of linear least squares approximations to the nonlinear problem, each of which is solved by an “inner” direct or iterative process. In comparison with Newton’s method and its variants, the algorithm is attractive because it does not require the evaluation of second-order derivatives in the Hessian of the objective function. In practice the exact Gauss–Newton method is too expensive to apply operationally in meteorological forecasting, and various approximations are made in order to reduce computational costs and to solve the problems in real time. Here we investigate the effects on the convergence of the Gauss–Newton method of two types of approximation used commonly in data assimilation. First, we examine “truncated” Gauss–Newton methods where the inner linear least squares problem is not solved exactly, and second, we examine “perturbed” Gauss–Newton methods where the true linearized inner problem is approximated by a simplified, or perturbed, linear least squares problem. We give conditions ensuring that the truncated and perturbed Gauss–Newton methods converge and also derive rates of convergence for the iterations. The results are illustrated by a simple numerical example. A practical application to the problem of data assimilation in a typical meteorological system is presented.
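A minimal Gauss-Newton iteration in Python for min ||r(x)||^2, with the inner linear least squares problem solved exactly; a truncated variant would solve the inner problem only approximately, and a perturbed variant would replace the Jacobian with a simplified operator. The exponential-fit test problem is illustrative:

```python
import numpy as np

def gauss_newton(r, J, x0, iters=10):
    """Gauss-Newton: each outer step solves the linearised problem
    min ||J(x) dx + r(x)||^2 -- no second derivatives are needed."""
    x = x0.astype(float)
    for _ in range(iters):
        dx, *_ = np.linalg.lstsq(J(x), -r(x), rcond=None)  # inner solve
        x = x + dx
    return x

# Illustrative problem: fit y = a * exp(b * t) to noiseless data.
t = np.linspace(0, 1, 20)
y = 2.0 * np.exp(-1.5 * t)
r = lambda x: x[0] * np.exp(x[1] * t) - y                     # residual vector
J = lambda x: np.column_stack([np.exp(x[1] * t),              # dr/da
                               x[0] * t * np.exp(x[1] * t)])  # dr/db
print(gauss_newton(r, J, np.array([1.0, 0.0])))               # approx [2.0, -1.5]
```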
Abstract:
Would a research assistant - who can search for ideas related to those you are working on, network with others (but only share the things you have chosen to share), doesn’t need coffee and who might even, one day, appear to be conscious - help you get your work done? Would it help your students learn? There is a body of work showing that digital learning assistants can be a benefit to learners. It has been suggested that adaptive, caring, agents are more beneficial. Would a conscious agent be more caring, more adaptive, and better able to deal with changes in its learning partner’s life? Allow the system to try to dynamically model the user, so that it can make predictions about what is needed next, and how effective a particular intervention will be. Now, given that the system is essentially doing the same things as the user, why don’t we design the system so that it can try to model itself in the same way? This should mimic a primitive self-awareness. People develop their personalities, their identities, through interacting with others. It takes years for a human to develop a full sense of self. Nobody should expect a prototypical conscious computer system to be able to develop any faster than that. How can we provide a computer system with enough social contact to enable it to learn about itself and others? We can make it part of a network. Not just chatting with other computers about computer ‘stuff’, but involved in real human activity. Exposed to ‘raw meaning’ – the developing folksonomies coming out of the learning activities of humans, whether they are traditional students or lifelong learners (a term which should encompass everyone). Humans have complex psyches, comprised of multiple strands of identity which reflect as different roles in the communities of which they are part – so why not design our system the same way? With multiple internal modes of operation, each capable of being reflected onto the outside world in the form of roles – as a mentor, a research assistant, maybe even as a friend. But in order to be able to work with a human for long enough to be able to have a chance of developing the sort of rich behaviours we associate with people, the system needs to be able to function in a practical and helpful role. Unfortunately, it is unlikely to get a free ride from many people (other than its developer!) – so it needs to be able to perform a useful role, and do so securely, respecting the privacy of its partner. Can we create a system which learns to be more human whilst helping people learn?
Abstract:
Different systems, different purposes – but how do they compare as learning environments? We undertook a survey of students at the University, asking whether they learned from their use of the systems, whether they made contact with other students through them, and how often they used them. Although it was a small scale survey, the results are quite enlightening and quite surprising. Blackboard is populated with learning material, has all the students on a module signed up to it, is a safe environment (in terms of Acceptable Use and some degree of staff monitoring) and provides privacy within the learning group (plus lecturer and relevant support staff). Facebook, on the other hand, has no learning material, has only some of the students using the system, and on the face of it has the opportunity for slips in privacy and potential bullying, because the Acceptable Use policy is more lax than an institutional one and breaches must be dealt with on an exception basis, when reported. So why do more students find people on their courses through Facebook than Blackboard? And why are up to 50% of students reporting that they have learned from using Facebook? Interviews indicate that students in subjects which use seminars are using Facebook to facilitate working groups – they can set up private groups which give them privacy to discuss ideas in an environment which is perceived as safer than Blackboard can provide. No staff interference, unless they choose to invite them in, and the opportunity to select who in the class can engage. The other striking finding is the difference in use between the genders. Males are using Blackboard more frequently than females, whilst the reverse is true for Facebook. Interviews suggest that this may have something to do with needing to access lecture notes… Overall, though, it appears that there is little relationship between the time spent engaging with Blackboard and reports that students have learned from it. Because Blackboard is our central repository for notes, any contact is likely to result in some learning. Facebook, however, shows a clear relationship between frequency of use and perception of learning – and our students post frequently to Facebook. Whilst much of this is probably trivia and social chit chat, the educational elements of it are, de facto, constructivist in nature. Further questions need to be answered - Is the reason the students learn from Facebook because they are creating content which others will see and comment on? Is it because they can engage in a dialogue, without the risk of interruption by others?
Abstract:
Competency management is a very important part of a well-functioning organisation. Unfortunately, competency descriptions are not uniformly specified or defined across borders, whether national, sectoral or organisational, leading to an opaque competency description market with a multitude of competency frameworks and competency benchmarks. An ontology is a formalised description of a domain, which enables automated reasoning engines to be built which, by utilising the interrelations between entities, can make “intelligent” choices in different situations within the domain. By introducing formalised competency ontologies, automated tools such as skill gap analysis, training suggestion generation, and job search and recruitment can be developed which compare and contrast different competency descriptions on the semantic level. The major problem with defining a common formalised ontology for competencies is that there are so many viewpoints of competencies and competency frameworks. Work within the TRACE project has focused on finding common trends within different competency frameworks in order to allow an intermediate competency description to be made, which other frameworks can reference. This research has shown that competencies can be divided up into “knowledge”, “skills” and what we call “others”. An ontology has been created based on this, with a simple structure of different “kinds” of “knowledges” and “skills” using semantic interrelations to define the basic semantic structure of the ontology. A prototype tool for performing skill gap analysis has been developed. Personal profiles can be produced using the tool, and a skill gap analysis is performed against a desired competency profile by using an ontologically based inference engine, which is able to list the closest fit and possible proficiency gaps.
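As a hedged toy of the skill gap idea in Python (the competency names and the flat dictionary structure are invented; the project's ontology uses richer semantic interrelations between “knowledges” and “skills”):

```python
# Toy profiles: competency -> required or held proficiency level (1-5).
desired = {"java": 4, "sql": 3, "teamwork": 3}
person = {"java": 2, "teamwork": 4}

def skill_gap(desired, person):
    """Return the competencies that fall short of the desired profile,
    with the size of each proficiency gap."""
    gaps = {}
    for competency, need in desired.items():
        have = person.get(competency, 0)
        if have < need:
            gaps[competency] = need - have
    return gaps

print(skill_gap(desired, person))  # {'java': 2, 'sql': 3}
```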
Abstract:
A new control paradigm for Brain Computer Interfaces (BCIs) is proposed. BCIs provide a means of communication direct from the brain to a computer that allows individuals with motor disabilities an additional channel of communication and control of their external environment. Traditional BCI control paradigms use motor imagery, frequency rhythm modification or the Event Related Potential (ERP) as a means of extracting a control signal. A new control paradigm for BCIs based on speech imagery is proposed. Further to this, a unique system for identifying correlations between components of the EEG and target events is introduced.
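As an illustrative sketch, not the paper's system: correlating a single EEG-derived feature with binary target events reduces, in the simplest case, to a point-biserial correlation, shown here in Python with synthetic data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_trials = 200
events = rng.integers(0, 2, n_trials)   # 1 = target event occurred

# Toy per-trial EEG feature (e.g. band power of one component),
# weakly modulated by the event.
feature = rng.normal(0, 1, n_trials) + 0.5 * events

r, p = stats.pearsonr(feature, events)  # point-biserial correlation
print(f"r = {r:.2f}, p = {p:.3g}")
```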
Abstract:
In the domain of intelligent buildings, saving energy and meeting occupant preferences are two important factors, and they are key to evaluating the performance of a work environment. In recent years, many researchers have combined these areas to create systems that transform the traditional work environment into what is called an intelligent work environment. With advances in agent technology, multi-agent systems have received increasing attention in the area of intelligent pervasive environments. In this paper, we review several issues in intelligent buildings with respect to the implementation of control systems for intelligent buildings via multi-agent systems. Furthermore, we present MASBO (Multi-Agent System for Building cOntrol), which has been implemented to control building facilities and to balance energy efficiency against occupant comfort. In addition, to enhance the MASBO system, collaboration through negotiation among agents is presented.
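A hedged toy of the balancing idea in Python, not the MASBO implementation: an energy agent and a comfort agent settle on a temperature setpoint by minimising a weighted joint cost, a crude stand-in for negotiation among agents.

```python
def energy_cost(setpoint, outside=30.0):
    """Energy agent: cooling cost grows with distance from the outside temp."""
    return (outside - setpoint) ** 2

def comfort_cost(setpoint, preferred=22.0):
    """Occupant agent: discomfort grows with distance from the preference."""
    return (preferred - setpoint) ** 2

def negotiate(w_energy=0.4, w_comfort=0.6):
    """Pick the setpoint minimising the weighted sum of both agents' costs."""
    candidates = [t / 2 for t in range(40, 61)]  # 20.0 ... 30.0 degrees C
    return min(candidates,
               key=lambda t: w_energy * energy_cost(t)
                             + w_comfort * comfort_cost(t))

print(negotiate())  # a compromise between saving energy and occupant comfort
```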