8 results for process data

in Dalarna University College Electronic Archive


Relevance:

70.00%

Publisher:

Abstract:

BACKGROUND: Annually, 2.8 million neonatal deaths occur worldwide, despite the fact that three-quarters of them could be prevented if available evidence-based interventions were used. Facilitation of community groups has been recognized as a promising method to translate knowledge into practice. In northern Vietnam, the Neonatal Health - Knowledge Into Practice trial evaluated facilitation of community groups (2008-2011) and succeeded in reducing the neonatal mortality rate (adjusted odds ratio, 0.51; 95 % confidence interval 0.30-0.89). The aim of this paper is to report on the process (implementation and mechanism of impact) of this intervention. METHODS: Process data were excerpted from diary information from meetings with facilitators and intervention groups, and from supervisor records of monthly meetings with facilitators. Data were analyzed using descriptive statistics. An evaluation including attributes and skills of facilitators (e.g., group management, communication, and commitment) was performed at the end of the intervention using a six-item instrument. Odds ratios were analyzed, adjusted for cluster randomization using general linear mixed models. RESULTS: To ensure eight active facilitators over 3 years, 11 Women's Union representatives were recruited and trained. Of the 44 intervention groups, composed of health staff and commune stakeholders, 43 completed their activities until the end of the study. In total, 95 % (n = 1508) of the intended monthly meetings with an intervention group and a facilitator were conducted. The overall attendance of intervention group members was 86 %. The groups identified 32 unique problems and implemented 39 unique actions. The identified problems targeted health issues concerning both women and neonates. Actions implemented were mainly communication activities. Communes supported by a group with a facilitator who was rated high on attributes and skills (n = 27) had lower odds of neonatal mortality (odds ratio, 0.37; 95 % confidence interval, 0.19-0.73) than control communes (n = 46). CONCLUSIONS: This evaluation identified several factors that might have influenced the outcomes of the trial: continuity of intervention groups' work, adequate attributes and skills of facilitators, and targeting problems along a continuum of care. Such factors are important to consider in scaling-up efforts.
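
The trial's effect estimates above are odds ratios with 95% confidence intervals, adjusted for cluster randomization with general linear mixed models. As a hedged illustration of only the basic arithmetic behind an unadjusted odds ratio and its confidence interval (the counts below are invented and the cluster adjustment is ignored), one could compute:

```python
# Minimal sketch: odds ratio and 95% CI from a 2x2 table. The counts are
# invented for illustration; they are NOT the trial's data, and the trial's
# reported estimates were additionally adjusted for cluster randomization.
import math

a, b = 40, 1960    # intervention communes: deaths, survivors (hypothetical)
c, d = 75, 1925    # control communes: deaths, survivors (hypothetical)

or_hat = (a * d) / (b * c)
se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log(OR)
lo = math.exp(math.log(or_hat) - 1.96 * se_log_or)
hi = math.exp(math.log(or_hat) + 1.96 * se_log_or)
print(f"OR = {or_hat:.2f}, 95% CI {lo:.2f}-{hi:.2f}")
```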

Relevance:

60.00%

Publisher:

Abstract:

The Internet of Things is an umbrella term for the development by which various kinds of devices can be equipped with sensors and data chips connected to the internet. An increasing amount of data means an increasing demand for solutions that can store, track, analyze, and process data. One way to meet this demand is to use cloud-based real-time analytics services. Multi-tenant and single-tenant are two types of architectures for cloud-based real-time analytics services that can be used to address the challenges of handling the increased data volumes. These architectures differ in terms of development complexity. In this work, Azure Stream Analytics represents a multi-tenant architecture and HDInsight/Storm represents a single-tenant architecture. To compare cloud-based real-time analytics services with different architectures, we chose to use the usability criteria efficiency, effectiveness, and user satisfaction. We wanted answers to the following questions related to these three usability criteria: • What similarities and differences can we see in development times? • Can we identify differences in functionality? • How do developers experience the two analytics services? We used a design-and-creation strategy to develop two proof-of-concept prototypes and collected data using several data collection methods. The proof-of-concept prototypes included two artefacts, one for Azure Stream Analytics and one for HDInsight/Storm. We evaluated them by carrying out five different scenarios, each with 2-5 sub-goals. We simulated streaming data by letting an application continuously generate random data, which we analyzed with the two real-time analytics services. We used observations to document how we worked with the development of the analytics services, to measure development times, and to identify differences in functionality. We also used questionnaires to find out what users thought of the analytics services. We concluded that Azure Stream Analytics was initially more usable than HDInsight/Storm, but that the differences decreased over time. Azure Stream Analytics was easier to work with for simpler analyses, while HDInsight/Storm offered a broader range of functionality.
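
The streaming data in the study was simulated by an application that continuously generated random values. Below is a minimal sketch of such a simulator, assuming a hypothetical sensor-event schema; actual ingestion into Azure Stream Analytics or HDInsight/Storm (event hubs, Storm topologies, credentials) is omitted.

```python
# Minimal streaming-data simulator sketch with an assumed IoT-style event
# schema. Events are printed instead of being sent to an ingestion endpoint.
import json
import random
import time
from datetime import datetime, timezone

def generate_event(device_id: int) -> dict:
    """One randomly generated sensor reading."""
    return {
        "deviceId": device_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "temperature": round(random.uniform(15.0, 35.0), 2),
        "humidity": round(random.uniform(20.0, 90.0), 2),
    }

if __name__ == "__main__":
    for _ in range(10):                  # would run indefinitely in a real test
        event = generate_event(random.randint(1, 5))
        print(json.dumps(event))         # replace with a call to the ingestion API
        time.sleep(0.5)
```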

Relevance:

60.00%

Publisher:

Abstract:

Vegetation growing on railway trackbeds and embankments presents potential problems. The presence of vegetation threatens the safety of personnel inspecting the railway infrastructure. In addition, vegetation growth clogs the ballast and results in inadequate track drainage, which in turn could lead to the collapse of the railway embankment. Assessing vegetation within the realm of railway maintenance is mainly carried out manually by making visual inspections along the track. This is done either on-site or by watching videos recorded by maintenance vehicles mainly operated by the national railway administrative body. A need for the automated detection and characterisation of vegetation on railways (a subset of vegetation control/management) has been identified in collaboration with local railway maintenance subcontractors and Trafikverket, the Swedish Transport Administration (STA). The latter is responsible for long-term planning of the transport system for all types of traffic, as well as for the building, operation and maintenance of public roads and railways. The purpose of this research project was to investigate how vegetation can be measured and quantified by human raters and how machine vision can automate the same process. Data were acquired at railway trackbeds and embankments during field measurement experiments. All field data (such as images) in this thesis work were acquired on operational, lightly trafficked railway tracks, mostly trafficked by goods trains. Data were also generated by letting (human) raters conduct visual estimates of plant cover and/or count the number of plants, either on-site or in-house by making visual estimates of the images acquired from the field experiments. The reliability of the (human) raters' visual estimates was then investigated and compared against machine vision algorithms. The overall results of the investigations involving human raters showed that their estimates were inconsistent and therefore unreliable. As a result of the exploration of machine vision, computational methods and algorithms enabling automatic detection and characterisation of vegetation along railways were developed. The results achieved in the current work have shown that the use of image data for detecting vegetation is indeed possible and that such results could form the basis for decisions regarding vegetation control. The machine vision algorithm which quantifies the vegetation cover was able to process 98% of the image data. Investigations of classifying plants from images were conducted in order to recognise the species; the classification accuracy was 95%. Objective measurements such as the ones proposed in this thesis offer easy access to the measurements for all the involved parties and make the subcontracting process easier, i.e., both the subcontractors and the national railway administration are given the same reference framework concerning vegetation before signing a contract, which can then be cross-checked after maintenance. A very important issue which comes with an increasing ability to recognise species is the maintenance of biological diversity. Biological diversity along the trackbeds and embankments can be mapped, and maintained, through better and more robust monitoring procedures. Continuous monitoring of the state of vegetation along railways is highly recommended in order to identify the need for maintenance actions and, in addition, to keep track of biodiversity.
The computational methods or algorithms developed form the foundation of an automatic inspection system capable of objectively supporting manual inspections, or replacing manual inspections.
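
As a hedged illustration of how vegetation cover can be quantified from an image, a simple excess-green index with a fixed threshold estimates the fraction of vegetated pixels. This is a generic baseline, not the algorithm developed in the thesis; the threshold and the synthetic test image are assumptions.

```python
# Illustrative vegetation-cover estimate via an excess-green index.
# Not the thesis's algorithm; threshold and test image are invented.
import numpy as np

def vegetation_cover(rgb: np.ndarray, threshold: float = 0.05) -> float:
    """Fraction of pixels classified as vegetation in an RGB image in [0, 1]."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    exg = 2.0 * g - r - b                        # excess-green index per pixel
    return float((exg > threshold).mean())

if __name__ == "__main__":
    # Synthetic stand-in for a trackbed photo: grey ballast with a green patch.
    img = np.full((200, 300, 3), 0.5)
    img[120:180, 40:140] = [0.25, 0.6, 0.2]      # "vegetation" pixels
    print(f"Estimated cover: {vegetation_cover(img):.1%}")
```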

Relevance:

30.00%

Publisher:

Abstract:

Continuous casting is a casting process that produces steel slabs in a continuous manner: steel is poured at the top of the caster and a steel strand emerges from the mould below. Molten steel is transferred from the AOD converter to the caster using a ladle. The ladle is designed to be strong and insulated, but complete insulation is never achieved; some of the heat is lost to the refractories by convection and conduction, and heat losses by radiation also occur. It is important to know the temperature of the melt during the process. For this reason, an online model was previously developed to simulate the steel and ladle wall temperatures during the ladle cycle. The model is ODE-based and was developed using a grey-box modelling technique. Its performance was acceptable, but it needed to be presented in a user-friendly way. The aim of this thesis work was to design a GUI that presents the steel and ladle wall temperatures calculated by the model and also allows the user to make adjustments to the model. The thesis also discusses a sensitivity analysis of the parameters involved and their effects on the temperature estimates.
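
To make the idea of an ODE-based grey-box model concrete, here is a minimal sketch of melt temperature in a ladle losing heat by conduction/convection to the wall and by radiation from the surface. It is not the thesis model; all parameter values are illustrative assumptions.

```python
# Minimal grey-box-style sketch (not the thesis model): melt temperature in a
# ladle cooled by wall losses and surface radiation. Parameter values invented.
from scipy.integrate import solve_ivp

SIGMA = 5.67e-8          # Stefan-Boltzmann constant [W/m^2 K^4]

def melt_temperature(t, y, UA=1000.0, T_wall=600.0, eps=0.8, A_top=1.0,
                     m=100e3, cp=750.0):
    """dT/dt of the melt: conductive/convective loss to the wall plus
    radiative loss from the (partly covered) free surface."""
    T = y[0]
    q_wall = UA * (T - T_wall)                        # [W]
    q_rad = eps * SIGMA * A_top * (T**4 - 300.0**4)   # [W]
    return [-(q_wall + q_rad) / (m * cp)]

sol = solve_ivp(melt_temperature, t_span=(0, 3600), y0=[1873.0],  # ~1600 °C in K
                max_step=10.0)
print(f"Melt temperature after 1 h: {sol.y[0, -1]:.1f} K")
```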

Relevance:

30.00%

Publisher:

Abstract:

Parkinson's disease (PD) is a degenerative illness whose cardinal symptoms include rigidity, tremor, and slowness of movement. In addition to its widely recognized effects, PD can have a profound effect on speech and voice. The speech symptoms most commonly demonstrated by patients with PD are reduced vocal loudness, monopitch, disruptions of voice quality, and an abnormally fast rate of speech. This cluster of speech symptoms is often termed hypokinetic dysarthria. The disease can be difficult to diagnose accurately, especially in its early stages; for this reason, automatic techniques based on artificial intelligence should increase diagnostic accuracy and help doctors make better decisions. The aim of the thesis work is to predict PD from audio files collected from various patients. The audio files are preprocessed to extract features; the preprocessed data contain 23 attributes and 195 instances. On average there are six voice recordings per person, so the number of instances can be reduced using a data compression technique such as the Discrete Cosine Transform (DCT). After data compression, attribute selection is performed using several of WEKA's built-in methods, such as ChiSquared, GainRatio, and InfoGain. After identifying the important attributes, we evaluate the attributes one by one using stepwise regression. Based on the selected attributes, the data are processed in WEKA using a cost-sensitive classifier with various algorithms such as MultiPass LVQ, Logistic Model Tree (LMT), and K-Star. The classification results show about 80% accuracy on average; using these features, approximately 95% classification accuracy for PD is achieved. This shows that, using the audio dataset, PD could be predicted with a high level of accuracy.
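
A rough Python stand-in for the WEKA pipeline described above (the thesis itself uses WEKA): compress each subject's repeated recordings with a DCT, select attributes, and train a cost-sensitive classifier via class weights. The data shapes and random values below are assumptions, not the original voice dataset.

```python
# Sketch of the described pipeline with scikit-learn as a stand-in for WEKA.
# Synthetic data: ~32 subjects x 6 recordings x 23 voice features (invented).
import numpy as np
from scipy.fft import dct
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import MinMaxScaler

rng = np.random.default_rng(0)
n_subjects, n_recordings, n_features = 32, 6, 23
X_raw = rng.random((n_subjects, n_recordings, n_features))
y = np.tile([0, 1], n_subjects // 2)        # 1 = PD, 0 = healthy (toy labels)

# Compress the six recordings per subject: DCT along the recording axis,
# keeping only the lowest-frequency coefficient per feature.
X = dct(X_raw, axis=1, norm="ortho")[:, 0, :]        # (n_subjects, n_features)

# Attribute selection (chi2 needs non-negative inputs, hence the scaling).
X_sel = SelectKBest(chi2, k=10).fit_transform(MinMaxScaler().fit_transform(X), y)

# Cost-sensitive classification via class weights, a stand-in for WEKA's
# CostSensitiveClassifier wrapping LVQ / LMT / K-Star.
clf = LogisticRegression(class_weight="balanced", max_iter=1000)
print("CV accuracy:", cross_val_score(clf, X_sel, y, cv=5).mean())
```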

Relevance:

30.00%

Publisher:

Abstract:

Photovoltaic processing is one of the most significant processes in a semiconductor process line. It is complicated because of the number of factors that directly or indirectly affect the processing and the final yield, so the results, especially those related to diffusion, anti-reflective coating, and impurity poisoning, cannot be predicted assertively, either mathematically or empirically. In this work, I experimented on and collected data from mono-crystalline silicon wafers with varying properties and outputs. A neural network trained on the available experimental data is then used to estimate the required output, and the estimates are validated against the test data for authenticity. The result can be seen as a kind of process simulation in which varying raw-wafer inputs are mapped to the desired yield of mono-crystalline photovoltaic cells.
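
A minimal sketch of that idea, assuming a toy wafer dataset: a small neural network maps wafer properties to a yield figure and is checked on held-out test data. The feature names and the yield function are invented for illustration, not taken from the experiments.

```python
# Neural-network yield estimation sketch on synthetic wafer data (invented).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
n = 200
wafers = np.column_stack([
    rng.normal(180, 10, n),    # thickness [um] (hypothetical feature)
    rng.normal(1.5, 0.2, n),   # resistivity [ohm*cm] (hypothetical feature)
    rng.normal(80, 5, n),      # anti-reflective coating thickness [nm]
])
# Hypothetical efficiency/yield: smooth function of the properties plus noise.
yield_pct = (18 - 0.01 * abs(wafers[:, 0] - 180) - 2 * abs(wafers[:, 1] - 1.5)
             - 0.02 * abs(wafers[:, 2] - 80) + rng.normal(0, 0.1, n))

X_train, X_test, y_train, y_test = train_test_split(wafers, yield_pct,
                                                    random_state=0)
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000,
                                   random_state=0))
model.fit(X_train, y_train)
print("Test R^2:", model.score(X_test, y_test))
```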

Relevance:

30.00%

Publisher:

Abstract:

In a global economy, manufacturers mainly compete on the cost efficiency of production, as the prices of raw materials are similar worldwide. Heavy industry has two big issues to deal with: on the one hand, there are large amounts of data that need to be analyzed effectively; on the other hand, making big improvements via investments in corporate structure or new machinery is neither economically nor physically viable. Machine learning offers a promising way for manufacturers to address both of these problems, as they are in an excellent position to employ learning techniques on their massive resource of historical production data. However, choosing a modelling strategy in this setting is far from trivial, and that choice is the objective of this article. The article investigates the characteristics of the most popular classifiers used in industry today; Support Vector Machines, the Multilayer Perceptron, Decision Trees, Random Forests, and the meta-algorithms Bagging and Boosting are the main learners investigated in this work. Lessons from real-world implementations of these learners are also provided, together with indications of when different learners are expected to perform well. The importance of feature selection and of relevant selection methods in an industrial setting is further investigated. Performance metrics are also discussed for the sake of completeness.
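
The classifier families named above can be compared with straightforward cross-validation. The sketch below uses scikit-learn and a public dataset purely as a placeholder for proprietary production data; it illustrates the comparison, not the article's actual experiments.

```python
# Cross-validation comparison of the classifier families discussed above,
# on a placeholder dataset (scikit-learn's breast-cancer data).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import (AdaBoostClassifier, BaggingClassifier,
                              RandomForestClassifier)
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

classifiers = {
    "SVM": SVC(),
    "MLP": MLPClassifier(max_iter=2000, random_state=0),
    "Decision tree": DecisionTreeClassifier(random_state=0),
    "Random forest": RandomForestClassifier(random_state=0),
    "Bagging": BaggingClassifier(random_state=0),
    "Boosting": AdaBoostClassifier(random_state=0),
}

for name, clf in classifiers.items():
    # Scaling matters for the SVM and the MLP; it is harmless for the trees.
    pipe = make_pipeline(StandardScaler(), clf)
    score = cross_val_score(pipe, X, y, cv=5).mean()
    print(f"{name:15s} mean CV accuracy: {score:.3f}")
```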

Relevance:

30.00%

Publisher:

Abstract:

Delineation of commuting regions has always been based on statistical units, often municipalities or wards. However, using these units has certain disadvantages, as their land areas differ considerably: much information is lost in the larger spatial base units, and distortions occur in the self-containment values, the main criterion in rule-based delineation procedures. Alternatively, one can start from relatively small standard-size units such as hexagons. In this way, much greater detail in spatial patterns is obtained. In this paper, regions are built by means of intrazonal maximization (Intramax) on the basis of hexagons. The use of geoprocessing tools, specifically developed for the processing of commuting data, speeds up processing time considerably. The results of the Intramax analysis are evaluated with travel-to-work area constraints, and comparisons are made with commuting fields, accessibility to employment, commuting flow density and network commuting flow size. From selected steps in the regionalization process, a hierarchy of nested commuting regions emerges, revealing the complexity of commuting patterns.
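
A highly simplified sketch of the intrazonal-maximization idea follows: starting from small zones (hexagons in the paper), repeatedly merge the pair of zones whose mutual flow is largest relative to their row and column totals, until a target number of regions remains. The flow matrix is invented, and contiguity constraints and the geoprocessing tools mentioned above are omitted, so this is not the paper's exact procedure.

```python
# Simplified greedy Intramax-style aggregation of a zone-to-zone flow matrix.
# Toy data only; no contiguity constraint, unlike a real delineation.
import numpy as np

def intramax(flows, n_regions):
    """Greedy aggregation of a commuting-flow matrix into n_regions groups."""
    T = flows.astype(float).copy()
    labels = [[i] for i in range(T.shape[0])]      # zones contained in each region
    while T.shape[0] > n_regions:
        O, D = T.sum(axis=1), T.sum(axis=0)        # row and column totals
        best, pair = -np.inf, None
        for i in range(T.shape[0]):
            for j in range(i + 1, T.shape[0]):
                score = (T[i, j] / (O[i] * D[j] + 1e-12)
                         + T[j, i] / (O[j] * D[i] + 1e-12))
                if score > best:
                    best, pair = score, (i, j)
        i, j = pair
        T[i, :] += T[j, :]                         # merge region j into region i
        T[:, i] += T[:, j]
        T = np.delete(np.delete(T, j, axis=0), j, axis=1)
        labels[i] += labels.pop(j)
    return labels

flows = np.random.default_rng(1).integers(0, 50, size=(8, 8))   # toy 8-zone matrix
print(intramax(flows, n_regions=3))
```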