17 results for medically supervised injectable maintenance clinic
at Universidad Politécnica de Madrid
Abstract:
As a summary of past, current, and future trends in software maintenance and reengineering research, this editorial takes a retrospective look at the past 14 years. We provide insight into how software maintenance has evolved and into the most important research topics presented in the series of the European Conference on Software Maintenance and Reengineering.
Abstract:
Replication of software engineering experiments is crucial for dealing with validity threats to experiments in this area. Even though the empirical software engineering community is aware of the importance of replication, the replication rate is still very low. The RESER'11 Joint Replication Project aims to tackle this problem by simultaneously running a series of several replications of the same experiment. In this article, we report the results of the replication run at the Universidad Politécnica de Madrid. Our results are inconsistent with those of the original experiment; however, we have identified possible causes for this inconsistency. We also discuss our experiences (in terms of pros and cons) during the replication.
Abstract:
This communication, presented at the 12th International Conference on Building Materials and Components, concerns the management and maintenance of the assets that constitute the Spanish Cultural Heritage, assets that share an artistic or historical background. Their conservation and maintenance have become a social demand, necessary for the preservation of public values and requiring the investment of adequate resources. Legal protection entails a number of obligations and rights intended to ensure conservation and heritage protection. The duty of maintenance and upkeep extends beyond the useful life of the property, which must endure for its cultural value more than for its usability. The necessary conditions must be established to prevent deterioration and to allow the property to fulfil its social function, seeking to prolong the life of the asset while preserving its physical integrity and its ability to convey the protected values. This obligation implies a substantial financial effort for the holder of the property, whether a public or private entity, raising a problem of economic sustainability. Economic exploitation, with the aim of contributing to the property's upkeep, is sometimes the best way to obtain resources. The work comprises different lines of research with the following objectives: establishing processes for assessing total costs over the building life cycle (LCC), during the planning stages or when drawing up maintenance budgets, in order to determine the most advantageous operating system; and relating the value of the property to its maintenance costs, together with a sensitivity analysis.
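As an illustration of the life cycle cost (LCC) assessment the first objective refers to, the sketch below computes a discounted total cost over an assumed service period. It is a hypothetical, minimal Python example; the figures, the simple discounting model, and the function name are not taken from the paper.

```python
# Illustrative only: a minimal life cycle cost (LCC) present-value calculation,
# discounting assumed annual maintenance costs over the asset's service period.
def life_cycle_cost(initial_cost, annual_maintenance, years, discount_rate):
    """Total discounted cost = initial cost + sum of discounted yearly maintenance."""
    pv_maintenance = sum(
        annual_maintenance / (1 + discount_rate) ** t for t in range(1, years + 1)
    )
    return initial_cost + pv_maintenance

# Hypothetical example: 1.0 M initial restoration, 0.02 M/year upkeep, 50 years, 3% rate.
print(round(life_cycle_cost(1.0, 0.02, 50, 0.03), 3))
```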
Abstract:
Virtual certification partially substitutes computer simulations for the experimental techniques required for rail vehicle certification. In this paper, several works in which these techniques were used in the vehicle design and track maintenance processes are presented. Dynamic simulation of multibody systems was used to virtually apply the EN14363 standard to certify the dynamic behaviour of vehicles. The works described are: assessment of a freight bogie design adapted to meter gauge, assessment of a railway track layout for a subway network, design of a freight bogie for higher speed and axle load, and processing of the data acquired by a track recording vehicle for track maintenance.
Abstract:
The aim of this research was to implement a methodology, through the generation of a supervised classifier based on the Mahalanobis distance, to characterize the grapevine canopy and assess leaf area and yield using RGB images. The method automatically processes sets of images and calculates the areas (number of pixels) corresponding to seven different classes (Grapes, Wood, Background, and four classes of Leaf of increasing leaf age). Each class is initialized by the user, who selects a set of representative pixels for it in order to induce the clustering around them. The proposed methodology was evaluated with 70 grapevine (V. vinifera L. cv. Tempranillo) images, acquired in a commercial vineyard located in La Rioja (Spain) after several defoliation and de-fruiting events on 10 vines, with a conventional RGB camera and no artificial illumination. The segmentation results showed a performance of 92% for leaves and 98% for clusters, and allowed the grapevine’s leaf area and yield to be assessed with R2 values of 0.81 (p < 0.001) and 0.73 (p = 0.002), respectively. This methodology, which operates with a simple image acquisition setup and guarantees the right number and kind of pixel classes, has proven suitable and robust enough to provide valuable information for vineyard management.
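To make the distance-based pixel classification concrete, the following Python sketch assigns each RGB pixel to the class whose user-selected training pixels are nearest in Mahalanobis distance, then counts pixels per class. It is a minimal reconstruction under stated assumptions (NumPy, one mean and covariance per class), not the authors' actual implementation.

```python
import numpy as np

def fit_classes(training_pixels):
    """Estimate a mean vector and inverse covariance matrix per class.

    training_pixels: dict mapping class name (e.g. "Grapes", "Wood", "Background",
    "Leaf1".."Leaf4") to an (n, 3) array of user-selected RGB samples.
    """
    models = {}
    for name, pixels in training_pixels.items():
        mu = pixels.mean(axis=0)
        cov = np.cov(pixels, rowvar=False)
        models[name] = (mu, np.linalg.pinv(cov))
    return models

def classify_image(rgb_image, models):
    """Label every pixel with the class of minimum Mahalanobis distance."""
    h, w, _ = rgb_image.shape
    flat = rgb_image.reshape(-1, 3).astype(float)
    names = list(models)
    dists = np.stack([
        np.sqrt(np.einsum("ij,jk,ik->i", flat - mu, inv_cov, flat - mu))
        for mu, inv_cov in (models[n] for n in names)
    ], axis=1)
    labels = np.array(names)[dists.argmin(axis=1)]
    return labels.reshape(h, w)

# Per-class areas (pixel counts), the quantities correlated with leaf area and yield:
# areas = {name: int((classify_image(img, models) == name).sum()) for name in models}
```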
Abstract:
Machine learning techniques are used for extracting valuable knowledge from data. Nowadays, these techniques are becoming even more important due to the evolution in data acquisition and storage, which is leading to data with different characteristics that must be exploited. Therefore, advances in data collection must be accompanied by advances in machine learning techniques to solve new challenges that might arise, in both academic and real applications. There are several machine learning techniques depending on both data characteristics and purpose. Unsupervised classification, or clustering, is one of the best-known techniques when data lack supervision (unlabeled data) and the aim is to discover data groups (clusters) according to their similarity. On the other hand, supervised classification needs data with supervision (labeled data) and its aim is to make predictions about the labels of new data. The presence of data labels is a very important characteristic that guides not only the learning task but also other related tasks such as validation. When only some of the available data are labeled whereas the others remain unlabeled (partially labeled data), neither clustering nor supervised classification can be used. This scenario, which is becoming common nowadays because the labeling process is ignored or costly, is tackled with semi-supervised learning techniques. This thesis focuses on the branch of semi-supervised learning closest to clustering, i.e., discovering clusters using the available labels as support to guide and improve the clustering process. Another important data characteristic, different from the presence of data labels, is the relevance or not of data features. Data are characterized by features, but it is possible that not all of them are relevant, or equally relevant, for the learning process. A recent clustering tendency, related to data relevance and called subspace clustering, claims that different clusters might be described by different feature subsets. This differs from traditional solutions to the data relevance problem, where a single feature subset (usually the complete set of original features) is found and used to perform the clustering process. The proximity of this work to clustering leads to the first goal of this thesis. As commented above, clustering validation is a difficult task due to the absence of data labels. Although there are many indices that can be used to assess the quality of clustering solutions, these validations depend on the clustering algorithms and the data characteristics. Hence, in the first goal three well-known clustering algorithms are used to cluster data with outliers and noise, in order to critically study how some of the best-known validation indices behave. The main goal of this work is, however, to combine semi-supervised clustering with subspace clustering to obtain clustering solutions that can be correctly validated by using either known indices or expert opinions. Two different algorithms are proposed, from different points of view, to discover clusters characterized by different subspaces. For the first algorithm, the available data labels are used to search for subspaces first, before searching for clusters. This algorithm assigns each instance to only one cluster (hard clustering) and is based on mapping the known labels to subspaces using supervised classification techniques. Subspaces are then used to find clusters using traditional clustering techniques.
The second algorithm uses the available data labels to search for subspaces and clusters at the same time in an iterative process. This algorithm assigns each instance to each cluster based on a membership probability (soft clustering) and is based on integrating the known labels and the search for subspaces into a model-based clustering approach. The different proposals are tested using several real and synthetic databases, and comparisons with other methods are also included where appropriate. Finally, as an example of a real and current application, different machine learning techniques, including one of the proposals of this work (the most sophisticated one), are applied to one of the most challenging biological problems nowadays, human brain modeling. Specifically, expert neuroscientists do not agree on a neuron classification for the brain cortex, which makes impossible not only any modeling attempt but also day-to-day work, since there is no common way to name neurons. Therefore, machine learning techniques may help to reach an accepted solution to this problem, which could be an important milestone for future research in neuroscience.
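As a rough illustration of the first of the two algorithms described above (labels are used to find a subspace, which is then handed to a conventional clustering step), the Python sketch below uses scikit-learn. The feature-importance-based subspace selection, the use of a random forest and k-means, and the simplification to a single shared subspace are all assumptions for illustration, not the thesis's actual procedure.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.cluster import KMeans

def semisupervised_subspace_clustering(X, y, n_clusters, top_k=5):
    """Hard semi-supervised clustering in a label-informed subspace.

    X          : (n_samples, n_features) data matrix
    y          : array of labels, with -1 for unlabeled instances
    n_clusters : number of clusters to extract
    top_k      : number of features kept in the subspace
    """
    labeled = y >= 0
    # 1) Use the labeled instances to learn which features carry the supervision.
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X[labeled], y[labeled])
    subspace = np.argsort(clf.feature_importances_)[::-1][:top_k]
    # 2) Cluster all instances (labeled and unlabeled) within that subspace.
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
    clusters = km.fit_predict(X[:, subspace])
    return clusters, subspace
```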
Abstract:
This paper describes an interactive set of tools used to determine the safety of tunnels and to provide data for decision making about their maintenance. Although there are, no doubt, still several drawbacks in the difficult procedures in use, it is clear that the approach is promising, and future improvements in both experimental and analytical methods will increase our understanding of this matter.
Abstract:
This paper presents the design and the results of the implementation of a model for the evaluation and improvement of maintenance management in industrial SMEs. A thorough review of the state of the art on maintenance management was conducted to determine the model variables; to characterize industrial SMEs, a questionnaire was developed with Likert-scale variables collected in the previous step. Once the questionnaire had been validated, we applied it to a group of seventy-five (75) SMEs in the industrial sector located in Bolivar State, Venezuela. To identify the most relevant maintenance management variables, we applied the exploratory factor analysis technique to the data collected. The score obtained across all the companies evaluated (57% compliance) highlights the weakness of maintenance management in industrial SMEs, particularly in the areas of planning and continuous improvement; most SMEs are assessed as being at the corrective maintenance stage, and their standard practice is merely to respond to the occurrence of faults.
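As a sketch of how exploratory factor analysis can be applied to Likert questionnaire data of this kind, the following Python example uses scikit-learn's FactorAnalysis. The number of factors, the synthetic data, and the preprocessing are illustrative assumptions; the paper does not specify the software used.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

# responses: (n_companies, n_items) matrix of Likert scores, e.g. 75 SMEs x questionnaire items.
def extract_factors(responses, n_factors=5):
    """Fit an exploratory factor analysis and return the loadings per item."""
    Z = StandardScaler().fit_transform(responses)
    fa = FactorAnalysis(n_components=n_factors, random_state=0)
    fa.fit(Z)
    # Rows = factors, columns = questionnaire items; large absolute loadings
    # indicate the items that define each maintenance-management dimension.
    return fa.components_

# Example with synthetic data standing in for the 75-company survey:
scores = np.random.randint(1, 6, size=(75, 20))
loadings = extract_factors(scores)
print(loadings.shape)  # (5, 20)
```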
Abstract:
INTRODUCTION: Objective assessment of motor skills has become an important challenge in minimally invasive surgery (MIS) training. Currently, there is no gold standard defining and determining residents' surgical competence. To aid in the decision process, we analyze the validity of a supervised classifier to determine the degree of MIS competence based on the assessment of psychomotor skills. METHODOLOGY: An adaptive neuro-fuzzy inference system (ANFIS) is trained to classify performance in a box trainer peg transfer task performed by two groups (expert/non-expert). There were 42 participants included in the study: the non-expert group consisted of 16 medical students and 8 residents (<10 MIS procedures performed), whereas the expert group consisted of 14 residents (>10 MIS procedures performed) and 4 experienced surgeons. Instrument movements were captured by means of the Endoscopic Video Analysis (EVA) tracking system. Nine motion analysis parameters (MAPs) were analyzed, including time, path length, depth, average speed, average acceleration, economy of area, economy of volume, idle time, and motion smoothness. Data reduction was performed by means of principal component analysis, and the result was then used to train the ANFIS net. Performance was measured by leave-one-out cross-validation. RESULTS: The ANFIS presented an accuracy of 80.95%, with 13 experts and 21 non-experts correctly classified. The total root mean square error was 0.88, while the area under the classifier's ROC curve (AUC) was measured at 0.81. DISCUSSION: We have shown the usefulness of ANFIS for classification of MIS competence in a simple box trainer exercise. The main advantage of using ANFIS resides in its continuous output, which allows fine discrimination of surgical competence. There are, however, challenges that must be taken into account when considering the use of ANFIS (e.g., training time, architecture modeling). Despite this, we have shown the discriminative power of ANFIS for a low-difficulty box trainer task, regardless of the individual significances of the MAPs. Future studies are required to confirm the findings and to include new tasks, conditions, and sample populations.
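The evaluation pipeline described (nine MAPs, PCA for data reduction, then leave-one-out cross-validation) can be sketched as follows in Python with scikit-learn. Because there is no standard scikit-learn ANFIS, a small neural network is used purely as a stand-in classifier; all parameter values are assumptions, not those of the study.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import LeaveOneOut
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier

# X: (42, 9) motion analysis parameters per participant; y: 1 = expert, 0 = non-expert.
def loo_accuracy(X, y, n_components=3):
    """Leave-one-out accuracy of a PCA-reduced classifier (stand-in for ANFIS)."""
    model = make_pipeline(
        StandardScaler(),
        PCA(n_components=n_components),
        MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0),
    )
    correct = 0
    for train_idx, test_idx in LeaveOneOut().split(X):
        model.fit(X[train_idx], y[train_idx])
        correct += int(model.predict(X[test_idx])[0] == y[test_idx][0])
    return correct / len(y)
```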
Abstract:
Background: Objective assessment of psychomotor skills has become an important challenge in the training of minimally invasive surgical (MIS) techniques. Currently, no gold standard defining surgical competence exists for classifying residents according to their surgical skills. Supervised classification has been proposed as a means of objectively establishing competence thresholds in psychomotor skills evaluation. This report presents a study comparing three classification methods in order to establish their validity on a set of tasks for basic skills assessment. Methods: Linear discriminant analysis (LDA), support vector machines (SVM), and adaptive neuro-fuzzy inference systems (ANFIS) were used. A total of 42 participants, divided into an experienced group (4 expert surgeons and 14 residents with >10 laparoscopic surgeries performed) and a non-experienced group (16 students and 8 residents with <10 laparoscopic surgeries performed), performed three box trainer tasks validated for the assessment of MIS psychomotor skills. Instrument movements were captured using the TrEndo tracking system, and nine motion analysis parameters (MAPs) were analyzed. The performance of the classifiers was measured by leave-one-out cross-validation using the scores obtained by the participants. Results: The mean accuracies of the classifiers were 71% (LDA), 78.2% (SVM), and 71.7% (ANFIS). No statistically significant differences in performance were identified between the classifiers. Conclusions: The three proposed classifiers showed good performance in the discrimination of skills, especially when information from all MAPs and tasks combined was considered. A correlation between the surgeons' previous experience and their execution of the tasks could be ascertained from the results. However, misclassifications across all the classifiers could imply the existence of other factors influencing psychomotor competence.
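For the classifier comparison itself, a leave-one-out evaluation of LDA and SVM on the MAP scores could look like the sketch below (Python, scikit-learn); ANFIS is omitted because it has no standard scikit-learn implementation, and the kernel and regularization choices are assumptions rather than the paper's settings.

```python
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score, LeaveOneOut
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# X: (n_participants, n_MAPs x n_tasks) score matrix; y: experienced vs non-experienced labels.
def compare_classifiers(X, y):
    """Leave-one-out accuracy for LDA and SVM (ANFIS omitted: no standard sklearn version)."""
    candidates = {
        "LDA": make_pipeline(StandardScaler(), LinearDiscriminantAnalysis()),
        "SVM": make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0)),
    }
    return {
        name: cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()
        for name, clf in candidates.items()
    }
```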
Abstract:
Background: Several changes occur in the maternal cardiovascular system during pregnancy. These changes place considerable stress on this system, especially during the third trimester, and can be accentuated in the presence of certain risk factors. The aims of this study were to assess the maternal cardiac adaptations produced by a specific exercise program, its safety for the maternal cardiovascular system and pregnancy outcomes, and its effectiveness in controlling cardiovascular risk factors. Material and methods: A randomized controlled trial was designed. 151 healthy pregnant women were assessed by echocardiography and electrocardiography at 20 and 34 weeks of gestation.
A total of 89 pregnant women participated in a physical exercise program (EG) from the first to the third trimester of pregnancy. It consisted of 25-30 minutes of aerobic conditioning (55-60% of heart rate reserve), general and specific strength exercises, and pelvic floor muscle training, 3 times per week in sessions of 55-60 minutes. Pregnant women randomly allocated to the control group (CG; n=62) remained sedentary during pregnancy. The study was approved by the Research Ethics Committee of Hospital Universitario de Fuenlabrada. Results: Baseline characteristics were similar between groups. Unlike the CG, pregnant women in the EG avoided the significant decrease in indexed cardiac output between the 2nd and 3rd trimesters of pregnancy and preserved the normal left ventricular geometric pattern, whereas in the CG it shifted to a concentric remodeling pattern. At 20 weeks, women in the EG had a significantly lower heart rate (CG: 79.56±10.76 vs. EG: 76.05±9.34; p=0.04), systolic blood pressure (CG: 110.19±10.23 vs. EG: 106.04±12.06; p=0.03), diastolic blood pressure (CG: 64.56±7.88 vs. EG: 61.81±7.15; p=0.03), and isovolumetric relaxation time (CG: 72.94±14.71 vs. EG: 67.05±16.48; p=0.04), and a longer E-wave deceleration time (CG: 142.09±39.11 vs. EG: 162.10±48.59; p=0.01). At 34 weeks, the EG had a significantly higher stroke volume (CG: 51.13±11.85 vs. EG: 56.21±12.79; p=0.04), early left ventricular filling (E) (CG: 78.38±14.07 vs. EG: 85.30±16.62; p=0.02), and E-wave deceleration time (CG: 130.35±37.11 vs. EG: 146.61±43.40; p=0.04). Conclusion: A regular physical exercise program during pregnancy may produce positive maternal cardiovascular adaptations during the third trimester and help to control cardiovascular risk factors without compromising maternal or fetal health.
Abstract:
Knowledge acquisition and model maintenance are key problems in knowledge engineering for improving productivity in the development of intelligent systems. Although a number of technical solutions have historically been proposed in this area, recent experience shows that there is still an important gap between the way end-users describe their expertise and the way intelligent systems represent knowledge. In this paper we propose an original way to cope with this problem based on electronic documents. We propose the concept of an intelligent document processor as a tool that allows the end-user to read/write a document explaining how an intelligent system operates, in such a way that if the user changes the content of the document, the intelligent system reacts to these changes. The paper presents the structure of such a document, based on knowledge categories derived from modern knowledge modeling methodologies, together with a number of requirements for it to be understandable by both end-users and problem solvers.
Abstract:
This thesis presents a task-oriented approach to telemanipulation for maintenance in large scientific facilities, with specific focus on the particle accelerator facilities at the European Organization for Nuclear Research (CERN) in Geneva, Switzerland, and the GSI Helmholtz Centre for Heavy Ion Research (GSI) in Darmstadt, Germany. It examines how telemanipulation can be used in these facilities and reviews how this differs from the representation of telemanipulation tasks in the literature. It provides methods to assess and compare telemanipulation procedures, as well as a test suite to compare telemanipulators themselves from a dexterity perspective. It presents a formalisation of telemanipulation procedures into a hierarchical model which can then be used as a basis to aid maintenance engineers in assessing tasks for telemanipulation, and as the basis for future research. The model introduces the new concept of Elemental Actions as the building block of telemanipulation movements and incorporates the dependent factors for procedures at a higher level of abstraction. In order to gain insight into realistic tasks performed by telemanipulation systems in both industrial and research environments, a survey of teleoperation experts is presented. Analysis of the responses leads to the conclusion that there is a need within the robotics community for physical benchmarking tests geared towards evaluating and comparing the dexterity of telemanipulators. A three-stage test suite is presented, designed to allow maintenance engineers to assess different telemanipulators for their dexterity. It incorporates general characteristics of the system, a method to compare the kinematic reachability of multiple telemanipulators, and physical test setups to assess dexterity both qualitatively and measurably using performance metrics. Finally, experimental results are provided for the application of the proposed test suite to two telemanipulation systems, one from a research setting and the other at CERN. The procedure performed is described, comparisons between the two systems are discussed, and input from the expert operator of the CERN system is provided.
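One way to picture a hierarchical procedure model built from Elemental Actions is the small Python data-structure sketch below. Apart from the term Elemental Action itself, the class names, fields, and metric are hypothetical illustrations, not the thesis's formalisation.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ElementalAction:
    """Smallest building block of a telemanipulation movement (e.g. grasp, rotate)."""
    name: str
    duration_s: float = 0.0          # hypothetical metric attached for assessment

@dataclass
class Task:
    """One step of the procedure, composed of ordered elemental actions."""
    name: str
    actions: List[ElementalAction] = field(default_factory=list)

@dataclass
class Procedure:
    """Top-level maintenance procedure carrying higher-level dependent factors."""
    name: str
    facility: str                    # e.g. "CERN" or "GSI"
    tasks: List[Task] = field(default_factory=list)

    def total_actions(self) -> int:
        return sum(len(t.actions) for t in self.tasks)
```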
Abstract:
The quality of life of a person may depend on early attention to neurodevelopmental disorders in childhood. Identification of language disorders before the age of six can speed up the required diagnosis and/or treatment processes. This paper details the enhancement of a Clinical Decision Support System (CDSS) aimed at assisting pediatricians and language therapists in the early identification and referral of language disorders. The system helps to fine-tune the Knowledge Base of Language Delays (KBLD), which had already been developed and validated in clinical routine with 146 children. Medical experts supported the construction of the Gades CDSS by gathering scientific consensus from the literature and from fifteen years of registered use cases of children with language disorders. The current research focuses on an innovative cooperative model that allows the KBLD of Gades to evolve through the supervised evaluation of the CDSS learnings with experts' feedback. The deployment of the resulting system is being assessed by a multidisciplinary team of seven experts from the fields of speech therapy, neonatology, pediatrics, and neurology.
Abstract:
The calibration results of an anemometer equipped with several rotors of varying size were analyzed. In each case, the 30-pulse-per-turn output signal of the anemometer was studied using Fourier series decomposition and correlated with the anemometer factor (i.e., the anemometer transfer function). In addition, a 3-cup analytical model was correlated with the data resulting from the wind tunnel measurements. The results indicate a good correlation between the post-processed output signal and the working condition of the cup anemometer. This correlation was also reflected in the results from the proposed analytical model. The present work reveals the possibility of remotely checking the status of a cup anemometer, indicating the presence of anomalies and, therefore, a decrease in the reliability of the wind sensor.
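A minimal sketch of the kind of Fourier analysis described is given below in Python, assuming the pulse train has already been converted to one rotation-speed sample per pulse; the harmonics kept and the NumPy-based approach are assumptions, not the paper's exact processing.

```python
import numpy as np

def harmonic_content(rotation_speed, pulses_per_turn=30):
    """First few Fourier coefficients of the within-turn speed variation.

    rotation_speed: 1-D array of instantaneous speeds, one sample per pulse,
    covering an integer number of turns. Changes in the relative weight of the
    harmonics can then be correlated with the anemometer factor / rotor condition.
    """
    n_turns = len(rotation_speed) // pulses_per_turn
    per_turn = rotation_speed[: n_turns * pulses_per_turn].reshape(n_turns, pulses_per_turn)
    mean_turn = per_turn.mean(axis=0)             # average speed pattern over one turn
    coeffs = np.fft.rfft(mean_turn - mean_turn.mean())
    return np.abs(coeffs)[1:4] / pulses_per_turn  # amplitudes of the 1st-3rd harmonics
```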