954 results for lean implementation time
Abstract:
Background: To enhance our understanding of complex biological systems like diseases, we need to put all of the available data into context and use it to detect relations, patterns and rules that allow predictive hypotheses to be defined. Life science has become a data-rich science, with information about the behaviour of millions of entities such as genes, chemical compounds, diseases, cell types and organs, organised in many different databases and/or spread throughout the literature. Existing knowledge such as genotype-phenotype relations or signal transduction pathways must be semantically integrated and dynamically organised into structured networks that are connected with clinical and experimental data. Different approaches to this challenge exist, but so far none has proven entirely satisfactory. Results: To address this challenge we previously developed a generic knowledge management framework, BioXM™, which allows the dynamic, graphic generation of domain-specific knowledge representation models based on specific objects and their relations, supporting annotations and ontologies. Here we demonstrate the utility of BioXM for knowledge management in systems biology as part of the EU FP6 BioBridge project on translational approaches to chronic diseases. From clinical and experimental data, text-mining results and public databases we generate a chronic obstructive pulmonary disease (COPD) knowledge base and demonstrate its use by mining specific molecular networks together with integrated clinical and experimental data. Conclusions: We generate the first semantically integrated, COPD-specific public knowledge base and find that, for the integration of clinical and experimental data with pre-existing knowledge, the configuration-based set-up enabled by BioXM reduced implementation time and effort for the knowledge base compared to similar systems implemented as classical software development projects. The knowledge base enables the retrieval of sub-networks including protein-protein interaction, pathway, gene-disease and gene-compound data, which are used for subsequent data analysis, modelling and simulation. Pre-structured queries and reports enhance usability; establishing their use in everyday clinical settings requires further simplification with a browser-based interface, which is currently under development.
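The sub-network retrieval described in this abstract can be pictured with a small graph sketch. This is purely illustrative: the gene, compound and relation names below are hypothetical, and BioXM itself is a configuration-based framework, not a networkx graph.

```python
# Hypothetical sketch of disease-centred sub-network extraction from a graph
# with typed relations (protein-protein, gene-disease, gene-compound).
# Node and edge names are invented for illustration.
import networkx as nx

g = nx.Graph()
g.add_edge("SERPINA1", "ELANE", relation="protein-protein")
g.add_edge("SERPINA1", "COPD", relation="gene-disease")
g.add_edge("HMOX1", "COPD", relation="gene-disease")
g.add_edge("HMOX1", "hemin", relation="gene-compound")

def subnetwork(graph, seed, max_depth=2, relations=None):
    """Breadth-first neighbourhood of `seed`, optionally filtered by relation type."""
    keep = {seed}
    frontier = {seed}
    for _ in range(max_depth):
        nxt = set()
        for node in frontier:
            for nbr in graph.neighbors(node):
                rel = graph.edges[node, nbr]["relation"]
                if relations is None or rel in relations:
                    nxt.add(nbr)
        keep |= nxt
        frontier = nxt
    return graph.subgraph(keep)

copd_net = subnetwork(g, "COPD", relations={"gene-disease", "protein-protein"})
print(sorted(copd_net.nodes))  # ['COPD', 'ELANE', 'HMOX1', 'SERPINA1']
```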
Abstract:
Volumes of data used in science and industry are growing rapidly. When researchers face the challenge of analyzing them, the format is often the first obstacle: the lack of standardized ways of exploring different data layouts means the problem must be solved from scratch each time. The ability to access data in a rich, uniform manner, e.g. using Structured Query Language (SQL), would offer expressiveness and user-friendliness. Comma-separated values (CSV) is one of the most common data storage formats. Despite its simplicity, handling it becomes non-trivial as file size grows. Importing CSVs into existing databases is time-consuming and troublesome, or even impossible if the horizontal dimension reaches thousands of columns. Most databases are optimized for handling large numbers of rows rather than columns, so performance on datasets with non-typical layouts is often unacceptable. Other challenges include schema creation, updates and repeated data imports. To address the above-mentioned problems, I present a system for accessing very large CSV-based datasets by means of SQL. It is characterized by: a "no copy" approach, where data stay mostly in the CSV files; "zero configuration", with no need to specify a database schema; a small, installation-free footprint, written in C++ with boost [1], SQLite [2] and Qt [3]; efficient plan execution through query rewriting, dynamic creation of indices for appropriate columns, and static data retrieval directly from CSV files; effortless support for millions of columns; easy handling of mixed text/number data thanks to per-value typing; and a very simple network protocol that provides an efficient interface for MATLAB and reduces implementation time for other languages. The software is available as freeware, along with educational videos, on its website [4]. It needs no prerequisites to run, as all of the libraries are included in the distribution package. I test it against existing database solutions using a battery of benchmarks and discuss the results.
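As a rough illustration of the "zero configuration" idea, a few lines of Python can expose a CSV file to SQL by inferring the schema from the header row. This sketch is not the described C++ engine: unlike the "no copy" system above, it copies the rows into an in-memory SQLite table, and the file name is hypothetical.

```python
# Toy "zero configuration" CSV-to-SQL loader: the schema is taken from the
# CSV header, and SQLite's per-value typing tolerates mixed text/number data.
import csv
import sqlite3

def csv_to_sql(path, table="data"):
    con = sqlite3.connect(":memory:")
    with open(path, newline="") as f:
        rows = csv.reader(f)
        header = next(rows)
        cols = ", ".join(f'"{c}"' for c in header)
        placeholders = ", ".join("?" for _ in header)
        con.execute(f"CREATE TABLE {table} ({cols})")   # untyped columns are valid SQLite
        con.executemany(f"INSERT INTO {table} VALUES ({placeholders})", rows)
    return con

# Usage (hypothetical file name):
# con = csv_to_sql("measurements.csv")
# for row in con.execute("SELECT * FROM data LIMIT 5"):
#     print(row)
```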
Abstract:
Lean thinking, through which a company can achieve significant advantages in the prevailing competitive situation, is spreading from the automotive industry to other industrial sectors. Lean thinking and its way of working require commitment, honesty, and openness to new ideas from the company. The goal of my thesis was to determine what a company must prepare for, and how, so that development measures can be carried through successfully. A further goal is to answer what Lean practice is and what it aims at. In addition, my study is intended to serve as a handbook of sorts for other plants, should they decide to extend lean practices to their own areas. The study is meant to provide concrete tools and methods for developing operations. My thesis is a case study in which lean tools and practices are tried out concretely at different stages of the work. The thesis is only the start of a project lasting years, which may in the future develop into a standardized operating model.
Abstract:
The purpose of the study was to find methods for improving the assemblability of a product by means of DFMA thinking and the Lean philosophy. The theoretical part of the thesis examined the concept of lead time and the significance of available resources. It also examined the use of DFMA to speed up assembly and the use of the Lean philosophy to develop the production process. The empirical part focused on problems encountered in assembly at the anode casting plant of Outotec Turula Oy and on eliminating them. The study clarified the role of resources and competence in safeguarding the development of the manufacturing process. With DFMA and Lean it is possible to find working methods that can shorten the lead time of the anode casting plant's assembly.
Abstract:
This project examines how product development and positioning are carried out through community strategic relations and marketing. Over time, marketing has undergone various changes. It was previously seen as a link between social and economic processes rather than as a set of activities aimed at satisfying customer needs. Through marketing, organizations focused on the importance of offering highly competitive products that satisfied the needs of each customer, in order to create unique products, well positioned in the market, that would not be replaced by the competition. For this reason they offered products with high quality standards, reducing production costs and the time needed to bring them to market and satisfy customer demand. Today, markets are saturated, so it is extremely important that customers be satisfied with the product, so that they buy it again and a lasting relationship is created that is satisfactory for both the company and the customer. For this reason, organizations capable of innovating, whether by improving an existing product or creating an entirely new one, will be better recognized by consumers and will thus be able to develop higher-priced products, increasing their profitability. Relationship marketing takes a somewhat broader view: just as customer loyalty is important for a company, all the personnel working in it must be kept satisfied in order to build trust between seller and customer. This project uses a quantitative approach in a theoretical-conceptual study, selecting the databases, information sources, and the documents that are most representative or provide the most information. The project falls within the research group on business sustainability, in the management line, seeking to identify opportunities that favor organizations, and within the project on the relationship of organizations with their environment and marketing.
Abstract:
This thesis presents a new approach for mapping air quality, so that this variable of the physical environment can be taken into account in physical or territorial planning. Ambient air quality is not normally considered in territorial planning, mainly due to the complexity of its composition and behavior and the difficulty of obtaining reliable, verified information. In addition, the wide spatial and temporal variability of air quality measurements makes their territorial consideration difficult and requires georeferenced information. This involves predicting measurements at places in the territory where there are no data. This thesis develops a geostatistical model for predicting air quality values in a territory. The proposed model is based on the interpolation of pollutant measurements from the monitoring stations using ordinary kriging, after a detrending process that removes the local character of the sampled values. With the detrending process, the trends in the time series of sampling data due to temporal and spatial variations in air quality are removed. The transformation of the air quality values into site-independent quantities is performed using land use parameters and other characteristic parameters of the local scale. This detrending of the monitoring data results in a spatially homogeneous input set, which is a prerequisite for the correct use of any interpolation algorithm, in particular ordinary kriging. After the interpolation step, a retrending or retransformation is applied in order to restore the local character to the final map at places where no monitoring data are available. For the development of this model, the Community of Madrid is chosen as the study area because of the availability of actual data. These data, air quality values and local parameters, are used at two moments: a starting point, to optimize the selection of the most suitable indicators for the detrending process and to develop each of the model stages; and a second moment, to fully apply the developed model and to evaluate its predictive power. The model is applied to estimate the average and maximum values of NO2 in the study territory. With the implementation of the proposed model, the territorialization of air quality data is undertaken with a reduction in three key factors for the effective integration of this parameter in territorial planning or in the associated decision-making process: uncertainty, time taken to generate the prediction, and associated resources (data and costs). The model allows the prediction of pollutant values within hours, compared to the modeling or analysis periods required by other methodologies. The required resources are also minimal: only data from the monitoring stations in the territory, which are normally available on the institutional websites of the authorities responsible for air quality network control and management. With regard to prediction uncertainties, the results of the proposed model are statistically very accurate, and the mean errors are generally similar to or lower than those found with existing methodologies.
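For readers unfamiliar with the interpolation step, the following minimal ordinary-kriging sketch (numpy only) shows the system of equations being solved. The spherical variogram and its parameters are placeholders, not the model fitted in the thesis, and the detrending/retrending steps are omitted.

```python
# Minimal ordinary kriging: solve [Gamma 1; 1^T 0][w; mu] = [gamma0; 1],
# then predict z* = w . z. Variogram parameters are illustrative only.
import numpy as np

def gamma(h, sill=1.0, rng=10.0, nugget=0.1):
    """Spherical semivariogram model (illustrative parameters)."""
    h = np.asarray(h, dtype=float)
    g = nugget + (sill - nugget) * (1.5 * h / rng - 0.5 * (h / rng) ** 3)
    return np.where(h >= rng, sill, np.where(h == 0, 0.0, g))

def ordinary_kriging(xy, z, target):
    """Predict the value at `target` from station coordinates `xy` and values `z`."""
    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = gamma(d)
    A[n, n] = 0.0                       # Lagrange-multiplier row/column
    b = np.ones(n + 1)
    b[:n] = gamma(np.linalg.norm(xy - target, axis=-1))
    w = np.linalg.solve(A, b)
    return w[:n] @ z                    # kriging weights applied to the data

# Toy usage: four stations with detrended NO2-like values, one unmonitored point.
xy = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
z = np.array([30.0, 42.0, 35.0, 50.0])
print(ordinary_kriging(xy, z, np.array([5.0, 5.0])))
```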
Abstract:
This study tested the utility of a stress and coping model of employee adjustment to a merger. Two hundred and twenty employees completed both questionnaires (Time 1: 3 months after merger implementation; Time 2: 2 years later). Structural equation modeling analyses revealed that positive event characteristics predicted greater appraisals of self-efficacy and less stress at Time 1. Self-efficacy, in turn, predicted greater use of problem-focused coping at Time 2, whereas stress predicted a greater use of problem-focused and avoidance coping. Finally, problem-focused coping predicted higher levels of job satisfaction and identification with the merged organization (Time 2), whereas avoidance coping predicted lower identification.
Abstract:
Proceedings IGLC-19, July 2011, Lima, Perú
Abstract:
Through the correct implementation of lean manufacturing methods, a company can greatly improve its business. Over a period of three months at TTM Technologies, I utilized my knowledge to fix existing problems and streamline production. In addition, other trouble areas in the production process were discovered and proper lean methods were used to address them. TTM Technologies saw many changes in the right direction over this time period.
Abstract:
This study focused on the method known as lean production as a work-related psychosocial risk factor in a Brazilian multinational auto parts company after its merger with other multinational companies. The authors conducted a qualitative analysis of two time points: the first using on-site observation and key interviews with managers and workers during implementation of lean production in 1996; the second, 16 years later, comparing data from a document search in labor inspection records from the Ministry of Labor and Employment and legal proceedings initiated by the Office of the Public Prosecutor for Labor Affairs. The merger led to layoffs, replacements, and an increase in the workday. A class action suit was filed on grounds of aggravated working conditions. The new production model led to psychosocial risks that increased the need for workers' health precautions when changes in the production process introduced new and increased risks of physical and mental illnesses.
Abstract:
Objectives: To analyze mortality rates of children with severe sepsis and septic shock in relation to time-sensitive fluid resuscitation and treatments received, and to define barriers to the implementation of the American College of Critical Care Medicine/Pediatric Advanced Life Support guidelines in a pediatric intensive care unit in a developing country. Methods: Retrospective chart review and prospective analysis of septic shock treatment in a pediatric intensive care unit of a tertiary care teaching hospital. Ninety patients with severe sepsis or septic shock admitted between July 2002 and June 2003 were included in this study. Results: Of the 90 patients, 83% had septic shock and 17% had severe sepsis; 80 patients had preexisting severe chronic diseases. Patients with septic shock who received less than a 20-mL/kg dose of resuscitation fluid in the first hour of treatment had a mortality rate of 73%, whereas patients who received more than a 40-mL/kg dose in the first hour of treatment had a mortality rate of 33% (P < 0.05). Patients treated less than 30 minutes after diagnosis of severe sepsis and septic shock had a significantly lower mortality rate (40%) than patients treated more than 60 minutes after diagnosis (P < 0.05). Controlling for the risk of mortality, early fluid resuscitation was associated with a 3-fold reduction in the odds of death (odds ratio, 0.33; 95% confidence interval, 0.13-0.85). The most important barriers to adequate severe sepsis and septic shock treatment were lack of adequate vascular access, lack of recognition of early shock, shortage of health care providers, and nonuse of goals and treatment protocols. Conclusions: The mortality rate was higher for children older than years, for those who received less than 40 mL/kg in the first hour, and for those whose treatment was not initiated in the first 30 minutes after the diagnosis of septic shock. The acknowledgment of existing barriers to timely fluid administration and the establishment of objectives to overcome these barriers may lead to a more successful implementation of the American College of Critical Care Medicine guidelines and reduced mortality rates for children with septic shock in the developing world.
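The reported 3-fold reduction can be illustrated with the arithmetic behind an odds ratio. The counts below are invented for illustration; they are not the study's data.

```python
# Hypothetical 2x2 table showing how an odds ratio near the reported 0.33 arises.
deaths_early, survivors_early = 10, 20      # early fluid resuscitation (invented counts)
deaths_late, survivors_late = 18, 12        # delayed resuscitation (invented counts)

odds_early = deaths_early / survivors_early # odds of death with early fluids: 0.5
odds_late = deaths_late / survivors_late    # odds of death without: 1.5
print(round(odds_early / odds_late, 2))     # 0.33, i.e. a 3-fold reduction in odds
```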
Abstract:
This work aims to present the tools of Lean Thinking and to carry out a case study in an organization where this system is used. In a first phase, a literature review of Lean Thinking is presented: a business system, a way of specifying value and laying out the best sequence of value-creating actions. Next, a case study is carried out at a company (Engine Division) in the aeronautics sector, with a long and respected tradition, with the goal of reducing TAT (turnaround time), i.e., the time from an engine's arrival in the division until its delivery to the customer. First, the failures in the entire engine process are analyzed: repair times of parts removed at engine disassembly that must be available for reassembly, parts requested from other departments of the company that are not available when needed, and the layout of the division. Finally, the results achieved so far in the Engine Division are analyzed and the Lean Thinking tools are applied with a view to implementation. It is important to note that successful implementation requires, first and foremost, a firm commitment from management and full adherence to a culture of seeking out and eliminating waste. The work concludes by highlighting the importance of this system and the improvements that can be achieved through its deployment.
Abstract:
In this paper we discuss the challenges and design principles of an implementation of slot-based task-splitting algorithms in the Linux 2.6.34 kernel. We show that this kernel version provides the features required for implementing such scheduling algorithms. We show that the real behavior of the scheduling algorithm is very close to the theoretical one. We run and discuss experiments on 4-core and 24-core machines.
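The following toy sketch illustrates the general idea behind slot-based task-splitting (not the authors' kernel patch): time is divided into equal slots, and a task split between two processors is given the tail of one processor's slot and the head of the other's, so its two reserves are contiguous across the slot boundary but never run in parallel. All names and values are illustrative.

```python
# Illustrative reserve assignment for a task split across two CPUs under
# slot-based task-splitting. SLOT and split_share are arbitrary toy values.
SLOT = 1.0          # slot length (time units, illustrative)

def reserves(split_share, cpu_a, cpu_b):
    """Reserve windows inside each slot for a task split across two CPUs."""
    x = split_share * SLOT                  # fraction of a slot given to the split task
    return {
        cpu_a: (SLOT - x, SLOT),            # tail of cpu_a's slot
        cpu_b: (0.0, x),                    # head of cpu_b's next slot, contiguous in time
    }

print(reserves(0.3, "cpu0", "cpu1"))
# {'cpu0': (0.7, 1.0), 'cpu1': (0.0, 0.3)}
```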
Abstract:
IEEE 802.15.4 is the most widely used protocol for Wireless Sensor Networks (WSNs) and serves as a baseline for several higher-layer protocols such as ZigBee, 6LoWPAN or WirelessHART. Its MAC (Medium Access Control) supports both contention-free (CFP, based on the reservation of guaranteed time slots, GTS) and contention-based (CAP, ruled by CSMA/CA) access when operating in beacon-enabled mode. It thus enables differentiation between real-time and best-effort traffic. However, some WSN applications and higher-layer protocols may strongly benefit from the possibility of supporting more traffic classes. This happens, for instance, in dense WSNs used for time-sensitive industrial applications. In this context, we propose to differentiate traffic classes within the CAP, enabling lower transmission delays and higher success probability for time-critical messages, such as those for event detection, GTS reservation and network management. Building upon a previously proposed methodology (TRADIF), in this paper we outline its implementation and experimental validation over a real-time operating system. Importantly, TRADIF is fully backward compatible with the IEEE 802.15.4 standard, enabling the creation of different traffic classes just by tuning some MAC parameters.
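A small sketch of the kind of per-class MAC-parameter tuning such an approach relies on follows. The attribute names (macMinBE, macMaxBE, macMaxCSMABackoffs) are standard IEEE 802.15.4 PIB attributes, but the per-class values are illustrative, not those chosen by TRADIF.

```python
# Two CSMA/CA parameter sets: time-critical frames draw shorter initial
# backoffs and give up sooner than best-effort frames. Values are illustrative.
MAC_PARAMS = {
    "time-critical": {"macMinBE": 1, "macMaxBE": 3, "macMaxCSMABackoffs": 2},
    "best-effort":   {"macMinBE": 3, "macMaxBE": 5, "macMaxCSMABackoffs": 4},
}

def expected_first_backoff(cls, unit_backoff_symbols=20):
    """Mean delay of the first CSMA/CA backoff, in symbols."""
    be = MAC_PARAMS[cls]["macMinBE"]
    # The MAC draws a delay uniformly from 0 .. 2^BE - 1 unit backoff periods.
    return (2 ** be - 1) / 2 * unit_backoff_symbols

for cls in MAC_PARAMS:
    print(cls, expected_first_backoff(cls), "symbols")
# time-critical frames wait 10 symbols on average, best-effort frames 70
```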
Abstract:
This technical report describes the implementation details of the Time Division Beacon Scheduling approach in IEEE 802.15.4/ZigBee cluster-tree networks, focusing on some aspects of the ZigBee Network Layer and the Time Division Beacon Scheduling mechanism. The report demonstrates the feasibility of our approach based on the evaluation of experimental results. We also present an overview of the ZigBee addressing and tree-routing scheme.
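As a rough illustration of the scheduling arithmetic involved (standard IEEE 802.15.4 quantities, with illustrative BO/SO values rather than those of the report), the beacon interval can be divided into 2^(BO-SO) non-overlapping superframe windows, one per cluster head:

```python
# Time-division beacon scheduling arithmetic: each cluster head's beacon is
# offset by a whole superframe duration so active periods never overlap.
A_BASE_SUPERFRAME = 960          # aBaseSuperframeDuration, in symbols

def beacon_offsets(BO, SO, cluster_heads):
    BI = A_BASE_SUPERFRAME * 2 ** BO      # beacon interval
    SD = A_BASE_SUPERFRAME * 2 ** SO      # superframe (active period) duration
    slots = BI // SD                      # 2^(BO-SO) non-overlapping windows
    assert len(cluster_heads) <= slots, "not enough windows for all clusters"
    return {ch: i * SD for i, ch in enumerate(cluster_heads)}

# Hypothetical cluster tree: with BO=7, SO=5 there are 4 windows for 3 cluster heads.
print(beacon_offsets(7, 5, ["coordinator", "router1", "router2"]))
# {'coordinator': 0, 'router1': 30720, 'router2': 61440}
```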