958 results for data generation


Relevance:

70.00%

Publisher:

Abstract:

There is an increasing tendency to turn the current power grid, which is essentially unaware of variations in electricity demand and of scattered energy sources, into something capable of a degree of intelligence by using tools strongly related to information and communication technologies, thereby becoming the so-called Smart Grid. In fact, the Smart Grid can be considered an extensive smart system that spreads throughout any area where power is required, providing significant optimization of energy generation, storage and consumption. However, the information that must be handled to accomplish these tasks is challenging both in terms of complexity (semantic features, distributed systems, suitable hardware) and quantity (consumption data, generation data, forecasting functionalities, service reporting), since the different energy beneficiaries tend to be as heterogeneous as the activities they carry out. This paper presents a proposal on how to deal with these issues by using a semantic middleware architecture that integrates different components focused on specific tasks, and describes how it is used to handle information at every level and satisfy end-user requests.

Relevance:

70.00%

Publisher:

Abstract:

Красимир Манев, Антон Желязков, Станимир Бойчев - The article presents the implementation of the final phase of an automatic test data generator for structural testing of software written in an object-oriented programming language: the generation of the source code of the testing module. Some details of the implementation of the earlier phases that are important for the implementation of the final phase are presented first. The algorithm for generating the code of the testing module is then described.
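The abstract describes emitting the source code of a testing module from previously generated test data. As a rough illustration of that final code-generation phase (not the authors' implementation, which targets an object-oriented language), the sketch below renders a unittest module from a hypothetical list of (arguments, expected result) pairs; the target class, method and test cases are placeholders.

```python
# Illustrative sketch only: render a unit-test module from generated test data.
# The target class, method and test-case data below are hypothetical placeholders.

def generate_test_module(class_name, method_name, cases):
    """Return Python source for a test module covering (args, expected) pairs."""
    lines = [
        "import unittest",
        f"from target import {class_name}",
        "",
        f"class Test{class_name}(unittest.TestCase):",
    ]
    for i, (args, expected) in enumerate(cases):
        arg_src = ", ".join(repr(a) for a in args)
        lines += [
            f"    def test_case_{i}(self):",
            f"        obj = {class_name}()",
            f"        self.assertEqual(obj.{method_name}({arg_src}), {expected!r})",
        ]
    lines += ["", "if __name__ == '__main__':", "    unittest.main()"]
    return "\n".join(lines)


if __name__ == "__main__":
    print(generate_test_module("Calculator", "add", [((1, 2), 3), ((0, 0), 0)]))
```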

Relevance:

70.00%

Publisher:

Abstract:

This research paper presents the work on feature recognition, tool path data generation and integration with STEP-NC (AP-238 format) for features having Free form / Irregular Contoured Surface(s) (FICS). Initially, the FICS features are modelled / imported in UG CAD package and a closeness index is generated. This is done by comparing the FICS features with basic B-Splines / Bezier curves / surfaces. Then blending functions are caculated by adopting convolution theorem. Based on the blending functions, contour offsett tool paths are generated and simulated for 5 axis milling environment. Finally, the tool path (CL) data is integrated with STEP-NC (AP-238) format. The tool path algorithm and STEP- NC data is tested with various industrial parts through an automated UFUNC plugin.
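As a loose illustration of the contour-offset step, the sketch below samples a cubic Bezier boundary and offsets the sampled points along their normals by an assumed tool radius. It is a simplified stand-in under arbitrary assumptions, not the UG / STEP-NC pipeline or the convolution-based blending functions described above.

```python
import numpy as np

def cubic_bezier(p0, p1, p2, p3, n=50):
    """Sample a cubic Bezier curve at n parameter values."""
    t = np.linspace(0.0, 1.0, n)[:, None]
    return ((1 - t) ** 3 * p0 + 3 * (1 - t) ** 2 * t * p1
            + 3 * (1 - t) * t ** 2 * p2 + t ** 3 * p3)

def offset_contour(points, radius):
    """Offset a planar polyline by the tool radius along estimated unit normals."""
    tangents = np.gradient(points, axis=0)
    tangents /= np.linalg.norm(tangents, axis=1, keepdims=True)
    normals = np.stack([-tangents[:, 1], tangents[:, 0]], axis=1)  # rotate tangent 90 degrees
    return points + radius * normals

if __name__ == "__main__":
    # Hypothetical control points for a boundary curve; radius is an assumed tool radius.
    ctrl = [np.array(p, float) for p in [(0, 0), (1, 2), (3, 2), (4, 0)]]
    curve = cubic_bezier(*ctrl)
    toolpath = offset_contour(curve, radius=0.25)
    print(toolpath[:3])
```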

Relevance:

70.00%

Publisher:

Abstract:

Data mining, as a widely discussed topic, has been studied in various fields. Its potential for refining decision-making, revealing hidden patterns and creating valuable knowledge has won the attention of scholars and practitioners. However, fewer studies attempt to combine data mining and libraries, where data generation occurs all the time; this thesis aims to fill that gap. Meanwhile, potential opportunities created by data mining are explored to enhance one of the most important elements of libraries: reference service. In order to thoroughly demonstrate the feasibility and applicability of data mining, the literature is reviewed to establish a critical understanding of data mining in libraries and the current status of library reference service. The literature review indicates that free online data resources, other than data generated on social media, are rarely applied in current library data mining initiatives; this motivates the present study to utilize freely available online resources. Furthermore, the natural match between data mining and libraries is established. This match rests on the data richness of libraries and on viewing data mining as a kind of knowledge, an easy choice for libraries, and a sensible method for overcoming reference service challenges. The natural match, especially the point that data mining could help library reference service, lays the main theoretical foundation for the empirical work in this study. Turku Main Library was selected as the case to answer the research question of whether data mining is feasible and applicable for reference service improvement. In this case, daily visits to Turku Main Library from 2009 to 2015 are the resource for data mining. In addition, corresponding weather conditions are collected from Weather Underground, which is freely available online. Before being analyzed, the collected dataset is cleansed and preprocessed to ensure the quality of the data mining. Multiple regression analysis is employed to mine the final dataset: hourly visits are the dependent variable, and weather conditions, the Discomfort Index and the day of the week are the independent variables. In the end, four models, one per season, are established to predict visits in each season. Patterns are identified in the different seasons and implications are drawn from the discovered patterns. In addition, library-climate points are generated by a clustering method, which simplifies the process for librarians of using weather data to forecast library visits. The data mining results are then interpreted from the perspective of improving reference service. After this data mining work, the results of the case study were presented to librarians to collect professional opinions on the possibility of employing data mining to improve reference services. The opinions collected were positive, which implies that it is feasible to utilize data mining as a tool to enhance library reference service.
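A minimal sketch of the modelling step just described, using placeholder data in place of the Turku visit records and the Weather Underground observations: an ordinary least-squares regression of hourly visits on weather variables fitted per season, followed by a simple clustering of weather conditions into "library-climate" groups. The column names, values and cluster count are assumptions for illustration only.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.cluster import KMeans

# Placeholder data standing in for the visit/weather dataset.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "visits": rng.poisson(120, 500),
    "temperature": rng.normal(5, 10, 500),
    "discomfort_index": rng.normal(50, 8, 500),
    "weekday": rng.integers(0, 7, 500),
    "season": rng.choice(["winter", "spring", "summer", "autumn"], 500),
})

# One regression model per season: visits regressed on weather and weekday.
features = ["temperature", "discomfort_index", "weekday"]
for season, part in df.groupby("season"):
    model = LinearRegression().fit(part[features], part["visits"])
    print(season, dict(zip(features, model.coef_.round(2))))

# Cluster weather conditions into a few "library-climate" groups.
df["climate_group"] = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(
    df[["temperature", "discomfort_index"]])
print(df.groupby("climate_group")["visits"].mean().round(1))
```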

Relevance:

60.00%

Publisher:

Abstract:

OBJECTIVE: To estimate rates of non-adherence to telemedicine strategies aimed at treating drug addiction. METHODS: A systematic review was conducted of randomized controlled trials investigating different telemedicine treatment methods for drug addiction. The following databases were consulted between May 18, 2012 and June 21, 2012: PubMed, PsycINFO, SciELO, Wiley (The Cochrane Library), Embase, Clinical trials and Google Scholar. The Grading of Recommendations Assessment, Development and Evaluation (GRADE) approach was used to evaluate the quality of the studies. The criteria evaluated were: appropriate sequence of data generation, allocation concealment, blinding, description of losses and exclusions, and analysis by intention to treat. There were 274 studies selected, of which 20 were analyzed. RESULTS: Non-adherence rates varied between 15.0% and 70.0%. The interventions evaluated lasted at least three months and, although they all used telemedicine as support, treatment methods differed. The quality of the studies also varied, from very poor to high. High-quality studies showed better adherence rates, as did those using more than one intervention technique and a limited treatment time. Mono-user studies showed better adherence rates than poly-user studies. CONCLUSIONS: Rates of non-adherence to treatment involving telemedicine on the part of users of psychoactive substances differed considerably, depending on the country, the intervention method, follow-up time and substances used. Using more than one intervention technique, a short duration of treatment and the type of substance used by patients appear to facilitate adherence.

Relevance:

60.00%

Publisher:

Abstract:

Interdisciplinary frameworks for studying natural hazards and their temporal trends have an important potential for data generation for risk assessment, land use planning, and therefore the sustainable management of resources. This paper focuses on the adjustments required because of the wide variety of scientific fields involved in the reconstruction and characterisation of flood events over the past 1000 years. The aim of this paper is to describe various methodological aspects of the study of flood events in their historical dimension, including the critical evaluation of old documentary and instrumental sources, flood-event classification and hydraulic modelling, and homogeneity and quality control tests. Standardized criteria for flood classification have been defined and applied to the Isère and Drac floods in France, from 1600 to 1950, and to the Ter, the Llobregat and the Segre floods in Spain, from 1300 to 1980. The analysis of the Drac and Isère data series from 1600 to the present day showed that extraordinary and catastrophic floods were not distributed uniformly in time. However, the largest floods (general catastrophic floods) were homogeneously distributed in time within the period 1600-1900. No major flood occurred during the 20th century in these rivers. From 1300 to the present day, no homogeneous behaviour was observed for extraordinary floods in the Spanish rivers. The largest floods were uniformly distributed in time within the period 1300-1900 for the Segre and Ter rivers.
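To make the classification idea concrete, here is a toy sketch that assigns documentary flood records to the magnitude classes named above (ordinary, extraordinary, catastrophic, general catastrophic) based on whether overflow and damage are reported. The decision rules and the two sample records are purely hypothetical and do not reproduce the study's standardized criteria.

```python
# Toy sketch: assign documentary flood records to magnitude classes.
# The rules and records are hypothetical, not the study's standardized criteria.

def classify_flood(overflow, structural_damage, widespread):
    if overflow and structural_damage and widespread:
        return "general catastrophic"
    if overflow and structural_damage:
        return "catastrophic"
    if overflow:
        return "extraordinary"
    return "ordinary"

hypothetical_records = [
    {"source": "chronicle A", "overflow": True, "structural_damage": True, "widespread": True},
    {"source": "chronicle B", "overflow": True, "structural_damage": False, "widespread": False},
]
for rec in hypothetical_records:
    label = classify_flood(rec["overflow"], rec["structural_damage"], rec["widespread"])
    print(rec["source"], "->", label)
```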

Relevance:

60.00%

Publisher:

Abstract:

The current study proposes a new procedure for separately estimating slope change and level change between two adjacent phases in single-case designs. The procedure eliminates baseline trend from the whole data series prior to assessing treatment effectiveness. The steps necessary to obtain the estimates are presented in detail, explained, and illustrated. A simulation study is carried out to explore the bias and precision of the estimators and compare them to an analytical procedure matching the data simulation model. The experimental conditions include two data generation models, several degrees of serial dependence, trend, level and/or slope change. The results suggest that the level and slope change estimates provided by the procedure are unbiased for all levels of serial dependence tested and trend is effectively controlled for. The efficiency of the slope change estimator is acceptable, whereas the variance of the level change estimator may be problematic for highly negatively autocorrelated data series.
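A rough sketch of the general idea of removing the baseline trend from the whole series before estimating level change and slope change between two phases. The exact estimators proposed in the study differ from this simplified version, and the data below are simulated placeholders.

```python
import numpy as np

def level_and_slope_change(baseline, treatment):
    """Sketch: remove the baseline trend from the whole series, then estimate
    level change and slope change between phases from the detrended data."""
    n_a, n_b = len(baseline), len(treatment)
    # Fit the baseline trend and remove it from both phases.
    slope_a, intercept_a = np.polyfit(np.arange(n_a), baseline, 1)
    t_all = np.arange(n_a + n_b)
    detrended = np.concatenate([baseline, treatment]) - (intercept_a + slope_a * t_all)
    phase_a, phase_b = detrended[:n_a], detrended[n_a:]
    slope_change = np.polyfit(np.arange(n_b), phase_b, 1)[0]  # residual trend in phase B
    level_change = phase_b.mean() - phase_a.mean()
    return level_change, slope_change

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    base = 2 + 0.5 * np.arange(8) + rng.normal(0, 0.3, 8)          # baseline phase with trend
    treat = 6 + 0.9 * np.arange(8, 16) + rng.normal(0, 0.3, 8)     # treatment phase
    print(level_and_slope_change(base, treat))
```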

Relevance:

60.00%

Publisher:

Abstract:

This master's thesis surveys the literature on how evolutionary algorithms are used to solve different search and optimisation problems in the area of software engineering. Evolutionary algorithms are methods that imitate the natural evolution process. An artificial evolution process evaluates the fitness of each individual, where each individual is a candidate solution. The next population of candidate solutions is formed from the good properties of the current population by applying mutation and crossover operations. The literature was searched for different kinds of evolutionary algorithm applications related to software engineering; the applications were classified and described, and the necessary basics of evolutionary algorithms were also presented. It was concluded that the majority of evolutionary algorithm applications related to software engineering concern software design or testing. For example, there were applications for classifying software production data, project scheduling, static task scheduling related to parallel computing, allocating modules to subsystems, N-version programming, test data generation and generating an integration test order. Many applications were experimental rather than ready for real production use. There were also some Computer-Aided Software Engineering tools based on evolutionary algorithms.
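As a generic illustration of the evolutionary loop described above (fitness evaluation, selection, crossover, mutation), the sketch below evolves an integer test input toward a target value that stands in for a coverage goal. It is a toy example under assumed parameters, not one of the surveyed applications.

```python
import random

def fitness(x):
    """Toy objective standing in for a coverage-based fitness function."""
    return -abs(x - 42)  # maximal when the generated test input hits the target branch

def evolve(pop_size=30, generations=50, mutation_rate=0.2):
    population = [random.randint(-1000, 1000) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]          # selection: keep the fitter half
        children = []
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            child = (a + b) // 2                       # crossover: average of two parents
            if random.random() < mutation_rate:
                child += random.randint(-10, 10)       # mutation: small perturbation
            children.append(child)
        population = children
    return max(population, key=fitness)

print(evolve())
```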

Relevance:

60.00%

Publisher:

Abstract:

Recent advances in Information and Communication Technology (ICT), especially those related to the Internet of Things (IoT), are facilitating smart regions. Among the many services that a smart region can offer, remote health monitoring is a typical application of the IoT paradigm. It offers the ability to continuously monitor and collect health-related data from a person and transmit the data to a remote entity (for example, a healthcare service provider) for further processing and knowledge extraction. An IoT-based remote health monitoring system can be beneficial in rural areas of the smart region where people have limited access to regular healthcare services. The same system can be beneficial in urban areas where hospitals can be overcrowded and where it may take substantial time to access healthcare. However, such a system may generate a large amount of data. In order to realize an efficient IoT-based remote health monitoring system, it is imperative to study the network communication needs of such a system, in particular the bandwidth requirements and the volume of generated data. The thesis studies a commercial product for remote health monitoring in Skellefteå, Sweden. Based on the results obtained with the commercial product, the thesis identifies the key network-related requirements of a typical remote health monitoring system in terms of real-time event updates, bandwidth requirements and data generation. Furthermore, the thesis proposes an architecture called IReHMo, an IoT-based remote health monitoring architecture. This architecture allows users to incorporate several types of IoT devices to extend the sensing capabilities of the system. Using IReHMo, several IoT communication protocols, such as HTTP, MQTT and CoAP, have been evaluated and compared against each other. The results showed that CoAP is the most efficient protocol for transmitting small healthcare data payloads to remote servers. The combination of IReHMo and CoAP significantly reduced the required bandwidth as well as the volume of generated data (by up to 56 percent) compared to the commercial product. Finally, the thesis conducted a scalability analysis to determine the feasibility of deploying the combination of IReHMo and CoAP at scale in regions of northern Sweden.
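As a back-of-the-envelope illustration of why per-message protocol overhead matters for the volume of generated data in such a system, the sketch below compares rough, assumed header overheads for HTTP/TCP, MQTT/TCP and CoAP/UDP against a small sensor payload. The overhead figures, payload size and message rate are assumptions for illustration, not measurements from the thesis.

```python
# Rough comparison of per-message overhead for a small sensor payload.
# Overheads below are coarse assumptions, not measurements: CoAP over UDP has far
# smaller per-message headers than an HTTP request over TCP.

PAYLOAD_BYTES = 50          # e.g. one sensor reading with a timestamp (assumed)
ASSUMED_OVERHEAD = {        # protocol + transport header estimate per message (bytes)
    "HTTP/TCP": 300,
    "MQTT/TCP": 60,
    "CoAP/UDP": 20,
}

def daily_volume(readings_per_minute, overhead_bytes):
    """Total bytes generated per device per day for a given per-message overhead."""
    messages = readings_per_minute * 60 * 24
    return messages * (PAYLOAD_BYTES + overhead_bytes)

for proto, overhead in ASSUMED_OVERHEAD.items():
    mb = daily_volume(readings_per_minute=4, overhead_bytes=overhead) / 1e6
    print(f"{proto}: {mb:.2f} MB/day per device")
```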

Relevance:

60.00%

Publisher:

Abstract:

This research is a self-study into my life as an athlete, elementary school teacher, learner, and teacher educator/academic. Throughout the inquiry, I explore how my beliefs and values infused my lived experiences and ultimately influenced my constructivist, humanist, and ultimately holistic teaching and learning practice, which at times disrupted the status quo. I have written a collection of narratives (data generation) which embodied my identity as an unintelligent student/learner, a teacher/learner, an experiential learner, a tenacious participant, and a change agent, to name a few. As I unpack my stories and hermeneutically reconstruct their intent, I question their meaning as I explore how I can improve my teaching and learning practice and potentially effect positive change when instructing beginning teacher candidates at a Faculty of Education. At the outset I situate my story and provide the necessary political, social, and cultural background information to ground my research. I follow this with an in-depth look at the elements that interconnect the theoretical framework of this self-study by presenting the notion of writing at the boundaries through autoethnography (Ellis, 2000; Ellis & Bochner, 2004) and writing as a method of inquiry (Richardson, 2000). The emergent themes of experiential learning, identity, and embodied knowing surfaced during the data generation phase. I use Probyn's (1990) metaphor of locatedness to unpack these themes and ponder the question: where is experience located? I deepen the exploration by layering Drake's (2007) Know/Do/Be framework alongside locatedness and offer descriptions of learning moments grounded in pedagogical theories. In the final phase, I introduce thirdspace theory (Bhabha, 1994; Soja, 1996) as a space that allowed me to puzzle over educational dilemmas and begin to reconcile the binaries that existed in my life both personally and professionally. I end where I began by revisiting the questions that drove this study. In addition, I reflect upon the writing process and the challenges that I encountered while immersed in this approach, and contemplate the relevance of conducting a self-study. I leave the reader with what is waiting for me on the other side of the gate, for as Henry James suggested, "Experience is never limited, and it is never complete."

Relevance:

60.00%

Publisher:

Abstract:

As the ideal method of assessing the nutritive value of a feedstuff, namely offering it to the appropriate class of animal and recording the production response obtained, is neither practical nor cost-effective, a range of feed evaluation techniques have been developed. Each of these balances some degree of compromise with the practical situation against data generation. However, due to the impact of animal-feed interactions over and above that of feed composition, the target animal remains the ultimate arbiter of nutritional value. In this review, current in vitro feed evaluation techniques are examined according to the degree of animal-feed interaction. Chemical analysis provides absolute values and therefore differs from the majority of in vitro methods, which simply rank feeds. However, with no host animal involvement, estimates of nutritional value are inferred by statistical association. In addition, given the costs involved, the practical value of many analyses conducted should be reviewed. The in sacco technique has made a substantial contribution both to understanding rumen microbial degradative processes and to the rapid evaluation of feeds, especially in developing countries. However, the numerous shortfalls of the technique (common to many in vitro methods), the desire to eliminate the use of surgically modified animals for routine feed evaluation, and parallel improvements in in vitro techniques will see this technique increasingly replaced. The majority of in vitro systems use substrate disappearance to assess degradation; however, this provides no information regarding the quantity of derived end-products available to the host animal. As measurement of volatile fatty acids or microbial biomass production greatly increases analytical costs, fermentation gas release, a simple and non-destructive measurement, has been used as an alternative. However, as gas release alone is of little use, gas-based systems, in which both degradation and fermentation gas release are measured simultaneously, are attracting considerable interest. Alternative microbial inocula are being considered, as is the potential of using multi-enzyme systems to examine degradation dynamics. It is concluded that while chemical analysis will continue to form an indispensable part of feed evaluation, enhanced use will be made of increasingly complex in vitro systems. It is vital, however, that the function and limitations of each methodology are fully understood and that the temptation to over-interpret the data is avoided, so as to draw the appropriate conclusions. With careful selection and correct application, in vitro systems offer powerful research tools with which to evaluate feedstuffs.

Relevance:

60.00%

Publisher:

Abstract:

Although difference-stationary (DS) and trend-stationary (TS) processes have been subject to considerable analysis, there are no direct comparisons in which each in turn is the data-generation process (DGP). We examine the consequences of incorrectly choosing between these models for forecasting, for both known and estimated parameters. Three sets of Monte Carlo simulations illustrate the analysis: they evaluate the biases in conventional standard errors when each model is mis-specified, compute the relative mean-square forecast errors of the two models for both DGPs, and investigate autocorrelated errors, so that both models can better approximate the converse DGP. The outcomes are surprisingly different from established results.
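A small illustration in the spirit of such simulations, not the paper's actual design: the DGP below is difference-stationary (a random walk with drift), and the mean-square forecast errors of a fitted linear-trend (TS) model and a fitted random-walk-with-drift (DS) model are compared. Sample size, horizon and drift are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
T, H, REPS = 100, 10, 2000          # sample size, forecast horizon, replications

def simulate_ds(n, drift=0.1):
    """Difference-stationary DGP: random walk with drift."""
    return np.cumsum(drift + rng.normal(size=n))

def forecast_ts(y, h):
    """Forecasts from a fitted linear trend (TS model)."""
    slope, intercept = np.polyfit(np.arange(len(y)), y, 1)
    return intercept + slope * (len(y) - 1 + np.arange(1, h + 1))

def forecast_ds(y, h):
    """Forecasts from a random walk with estimated drift (DS model)."""
    drift = np.mean(np.diff(y))
    return y[-1] + drift * np.arange(1, h + 1)

mse_ts = mse_ds = 0.0
for _ in range(REPS):
    y = simulate_ds(T + H)
    train, future = y[:T], y[T:]
    mse_ts += np.mean((forecast_ts(train, H) - future) ** 2)
    mse_ds += np.mean((forecast_ds(train, H) - future) ** 2)

print("relative MSFE (TS model / DS model) when DS is the DGP:", mse_ts / mse_ds)
```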

Relevance:

60.00%

Publisher:

Abstract:

We consider forecasting using a combination, when no model coincides with a non-constant data generation process (DGP). Practical experience suggests that combining forecasts adds value, and can even dominate the best individual device. We show why this can occur when forecasting models are differentially mis-specified, and is likely to occur when the DGP is subject to location shifts. Moreover, averaging may then dominate over estimated weights in the combination. Finally, it cannot be proved that only non-encompassed devices should be retained in the combination. Empirical and Monte Carlo illustrations confirm the analysis.
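A minimal sketch of the point about location shifts, not the paper's experiments: two deliberately mis-specified devices (a long-run mean and a random-walk forecast) are combined either with least-squares weights estimated before a shift or by simple averaging; the shift timing, size and noise levels are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, shift_at = 400, 300
y = rng.normal(0, 1, n)
y[shift_at:] += 3.0                    # location shift in the DGP

# Two simple forecasting devices, each mis-specified in its own way.
f_mean = np.array([y[:t].mean() if t else 0.0 for t in range(n)])  # long-run mean: slow to adapt
f_rw = np.concatenate([[0.0], y[:-1]])                             # random walk: noisy, adapts fast

train = slice(50, shift_at)            # combination weights estimated before the shift
X = np.column_stack([f_mean[train], f_rw[train]])
w, *_ = np.linalg.lstsq(X, y[train], rcond=None)

test = slice(shift_at, n)
mse = lambda f: np.mean((f - y[test]) ** 2)
print("estimated weights:", w.round(2), "post-shift MSE:", mse(w[0] * f_mean[test] + w[1] * f_rw[test]))
print("simple average post-shift MSE:", mse(0.5 * (f_mean[test] + f_rw[test])))
```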

Relevance:

60.00%

Publisher:

Abstract:

As applications and systems continue to evolve, the way we interact with them also changes. Until now, navigation and use of applications and systems has mostly been done by hand, through mouse and keyboard. More recently, navigation via touch screens and voice has become increasingly common. When an application is to be controlled by voice, it is important that anyone can control it, regardless of their dialect. To assess how accurately a speech-recognition API (Application Programming Interface) perceives Swedish dialects, this study began with document studies of the characteristics and sound combinations of the dialects. These characteristics and sound combinations formed the basis for the words we selected to test the API with. Each dialect was thus assigned a word constructed to be particularly difficult for the API to perceive when pronounced in that dialect. A prototype was then developed, specifically an Android application, which served as a tool for the data collection. Since the work comprises both a prototype and an investigation, Design and Creation Research was chosen as the research strategy, with document studies and observations as data collection methods. Data were collected through observations, with the prototype as an aid, and through document studies. The empirical data recorded via the observations and the application showed that some dialects were easier for the API to perceive correctly. In some cases the results were expected, since certain words were built from sound combinations that, according to theory, would be pronounced very distinctively in a particular dialect. Sometimes the results for precisely these words were very low, but in other cases they were surprisingly high. Our conclusion was that the words we selected with the expectation of low scores for the specific dialect turned out to behave as expected on only two occasions. Instead, it was the word containing the sje and tje sounds, which according to theory are characteristics common to all dialects, that received the lowest scores overall.
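A minimal sketch of the kind of scoring such a study implies: counting, per dialect, how often the recognizer's output matches the target word. The dialect labels, words and transcriptions below are hypothetical placeholders, not the study's data or the actual speech-recognition API.

```python
# Illustrative scoring of speech-recognition output per dialect.
# All entries are hypothetical placeholders, not data from the study.
from collections import defaultdict

recognitions = [                      # (dialect, target word, word returned by the API)
    ("dialect A", "sjuksköterska", "sjuksköterska"),
    ("dialect A", "sjuksköterska", "shuksöterska"),
    ("dialect B", "kärnkraft", "kärnkraft"),
]

hits = defaultdict(int)
totals = defaultdict(int)
for dialect, target, recognized in recognitions:
    totals[dialect] += 1
    hits[dialect] += int(recognized.lower() == target.lower())

for dialect in totals:
    print(f"{dialect}: {hits[dialect] / totals[dialect]:.0%} correctly recognized")
```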

Relevance:

60.00%

Publisher:

Abstract:

A number of recent works have introduced statistical methods for detecting genetic loci that affect phenotypic variability, which we refer to as variability-controlling quantitative trait loci (vQTL). These are genetic variants whose allelic state predicts how much phenotype values will vary about their expected means. Such loci are of great potential interest in both human and non-human genetic studies, one reason being that a detected vQTL could represent a previously undetected interaction with other genes or environmental factors. The simultaneous publication of these new methods in different journals has in many cases precluded opportunity for comparison. We survey some of these methods, the respective trade-offs they imply, and the connections between them. The methods fall into three main groups: classical non-parametric, fully parametric, and semi-parametric two-stage approximations. Choosing between alternatives involves balancing the need for robustness, flexibility, and speed. For each method, we identify important assumptions and limitations, including those of practical importance, such as their scope for including covariates and random effects. We show in simulations that both parametric methods and their semi-parametric approximations can give elevated false positive rates when they ignore mean-variance relationships intrinsic to the data generation process. We conclude that choice of method depends on the trait distribution, the need to include non-genetic covariates, and the population size and structure, coupled with a critical evaluation of how these fit with the assumptions of the statistical model.
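As one concrete instance of the classical non-parametric group of tests mentioned above, the sketch below applies a Brown-Forsythe (median-centred Levene) test for variance heterogeneity across genotype groups at a single simulated locus. The simulated genotype frequencies and variance effect are arbitrary assumptions for illustration.

```python
import numpy as np
from scipy.stats import levene

rng = np.random.default_rng(0)
n = 300
genotype = rng.integers(0, 3, n)                 # 0/1/2 copies of the minor allele (simulated)
# Simulated phenotype whose variance (not mean) depends on genotype.
phenotype = rng.normal(0, 1 + 0.5 * genotype)

groups = [phenotype[genotype == g] for g in (0, 1, 2)]
stat, p = levene(*groups, center="median")       # Brown-Forsythe variant of Levene's test
print(f"Brown-Forsythe test for variance heterogeneity: W={stat:.2f}, p={p:.3g}")
```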