963 results for "Predictive regression"


Relevance:

20.00%

Publisher:

Abstract:

This study investigates the role of social media as a form of organizational knowledge sharing. Social media is examined in terms of the Web 2.0 technologies that organizations provide their employees as tools of internal communication. The study is anchored in a theoretical understanding of social media as technologies that enable both knowledge collection and knowledge donation, and it investigates the factors influencing employees' use of social media in their working environment. The multidisciplinary research tradition on knowledge sharing is presented, and social media is analyzed especially in relation to internal communication and knowledge sharing. Based on previous studies, it is assumed that personal, organizational, and technological factors influence employees' use of social media at work. The research is a case study of the employees of the Finnish company Wärtsilä, an eligible case organization because it deploys several Web 2.0 tools in its intranet. The research is based on quantitative methods: in total, 343 responses were obtained through an online survey available on Wärtsilä's intranet. Associations between the variables are analyzed using correlations, and multiple linear regression analysis is then used to test the relationships between the assumed factors and the use of social media. The analysis demonstrates that personal, organizational, and technological factors influence the respondents' use of social media. The strongest predictors are the benefits respondents expect to receive from using social media and their experience of using Web 2.0 in their private lives. Organizational factors, such as the activeness of managers and colleagues and organizational guidelines for using social media, are also significantly related to its use. In addition, respondents' understanding of their responsibilities affects their use of social media: the more social media is considered part of individual responsibilities, the more frequently it is used. Finally, technological factors must be recognized: the more user-friendly the tools are considered and the better the respondents' technical skills, the more frequently social media is used in the working environment. The central references on knowledge sharing include Chun Wei Choo's (2006) The Knowing Organization, Ikujiro Nonaka and Hirotaka Takeuchi's (1995) The Knowledge-Creating Company, and Linda Argote's (1999) Organizational Learning.
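A minimal sketch of the multiple-linear-regression step described above, assuming ordinary least squares on survey-scale predictors; the variable names and toy data are invented for illustration, not Wärtsilä's actual survey items:

```python
# Hypothetical predictors standing in for the survey variables
# (expected benefits, private Web 2.0 experience); toy data only.

def ols_fit(X, y):
    """Ordinary least squares via the normal equations (X'X)b = X'y,
    solved by Gaussian elimination with partial pivoting."""
    k = len(X[0])
    # Augmented matrix [X'X | X'y]
    A = [[sum(r[i] * r[j] for r in X) for j in range(k)]
         + [sum(r[i] * t for r, t in zip(X, y))] for i in range(k)]
    for c in range(k):                       # forward elimination
        p = max(range(c, k), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        for r in range(c + 1, k):
            f = A[r][c] / A[c][c]
            A[r] = [u - f * v for u, v in zip(A[r], A[c])]
    b = [0.0] * k                            # back substitution
    for c in range(k - 1, -1, -1):
        b[c] = (A[c][k] - sum(A[c][j] * b[j] for j in range(c + 1, k))) / A[c][c]
    return b

# Rows: [1, expected_benefits, web20_experience]; y: use frequency,
# generated here as exactly 1 + 0.5*benefits + 0.3*experience.
X = [[1, 1, 2], [1, 2, 1], [1, 3, 3], [1, 4, 2], [1, 5, 4]]
y = [2.1, 2.3, 3.4, 3.6, 4.7]
coefs = ols_fit(X, y)
print(coefs)  # → approximately [1.0, 0.5, 0.3]
```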


We present two new support vector approaches for ordinal regression. These approaches find the concentric spheres with minimum volume that contain most of the training samples. Both approaches guarantee that the radii of the spheres are properly ordered at the optimal solution. The size of the optimization problem is linear in the number of training samples. The popular SMO algorithm is adapted to solve the resulting optimization problem. Numerical experiments on some real-world data sets verify the usefulness of our approaches for data mining.
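The ordinal decision rule implied by ordered concentric spheres can be sketched as follows; the centre and radii stand in for a trained model (in the paper they come from an SMO-solved optimization) and are hand-picked here:

```python
import math

def rank(x, centre, radii):
    """Assign ordinal rank k if x falls inside sphere k but outside
    sphere k-1; radii must be sorted in increasing order, as the
    method's ordering guarantee ensures at the optimal solution."""
    d = math.dist(x, centre)
    for k, r in enumerate(radii, start=1):
        if d <= r:
            return k
    return len(radii) + 1            # beyond the largest sphere

centre = (0.0, 0.0)
radii = [1.0, 2.0, 3.0]              # properly ordered radii
print(rank((0.5, 0.0), centre, radii))   # → 1
print(rank((0.0, 2.5), centre, radii))   # → 3
print(rank((4.0, 0.0), centre, radii))   # → 4
```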


Processor architects face the challenging task of evaluating a large design space consisting of several interacting parameters and optimizations. To assist architects in making crucial design decisions, we build linear regression models that relate processor performance to micro-architectural parameters, using simulation-based experiments. We obtain good approximate models through an iterative process in which Akaike's information criterion is used to extract a good linear model from a small set of simulations, and limited further simulation is guided by the model using D-optimal experimental designs. The iterative process is repeated until the desired error bounds are achieved. We used this procedure to establish the relationship of the CPI performance response to 26 key micro-architectural parameters using a detailed cycle-by-cycle superscalar processor simulator. The resulting models provide a significance ordering of all micro-architectural parameters and their interactions, and explain the performance variations of micro-architectural techniques.
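The AIC-based selection step can be illustrated with a toy comparison of two candidate predictors; the "simulation results" here are synthetic stand-ins, not real CPI data:

```python
import math

def fit_simple(xs, ys):
    """Least-squares slope and intercept for a single predictor."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

def aic(rss, n, k):
    """Akaike's information criterion for a least-squares fit."""
    return n * math.log(rss / n) + 2 * k

# CPI observed at 5 design points; cache_size is the informative
# parameter, noise is an uninformative one.
cache_size = [1, 2, 3, 4, 5]
noise      = [5, 1, 4, 2, 3]
cpi        = [2.0, 1.8, 1.6, 1.4, 1.2]

scores = {}
for name, xs in [("cache_size", cache_size), ("noise", noise)]:
    a, b = fit_simple(xs, cpi)
    rss = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, cpi))
    scores[name] = aic(max(rss, 1e-12), len(cpi), k=2)

best = min(scores, key=scores.get)
print(best)  # → cache_size
```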


During the course of genome studies in a rural community in the South Indian state of Karnataka, DNA-based investigations and counselling for familial adenomatous polyposis (FAP) were requested via the community physician. The proposita died in 1940 and FAP had been clinically diagnosed in 2 of her 5 children, both deceased. DNA samples from 2 affected individuals in the third generation were screened for mutations in the APC gene, and a frame-shift mutation was identified in exon 15 with a common deletion at codon 1061. Predictive testing for the mutation was then organized on a voluntary basis. There were 11 positive tests, including confirmatory positives on 2 persons diagnosed by colonoscopy, and to date surgery has been successfully undertaken on 3 previously undiagnosed adults. The ongoing success of the study indicates that, with appropriate access to the facilities offered by collaborating centres, predictive testing is feasible for diseases such as FAP and could be of significant benefit to communities in economically less developed countries.


In this thesis, I study the changing landscape and human environment of the Mätäjoki Valley, West Helsinki, using reconstructions and predictive modelling. The study is part of a larger project funded by the City of Helsinki aiming to map the past of the Mätäjoki Valley. The changes in landscape from an archipelago in the Ancylus Lake to a river valley are studied over the period from 10,000 to 2,000 years ago. Alongside shore displacement, the changing environment is examined from a human perspective, and the locations of dwelling sites at various times are predicted. As a result, two map series were produced showing how the landscape changed and where habitation is predicted. To support them, previous research on the history of the waterways, climate, vegetation and archaeology is also reviewed. The changing landscape of the river valley is reconstructed using GIS methods. For this purpose a new laser point data set was used and, at the same time, tested in the context of landscape modelling. Dwelling sites were modelled with logistic regression analysis. The spatial predictive model combines data on the locations of known dwelling sites, environmental factors and shore displacement. The predictions were visualised as raster maps showing the predicted habitation 3,000 and 5,000 years ago; the aim of these maps is to help archaeologists identify potential locations of human activity. The landscape reconstructions clarified previous shore displacement studies of the Mätäjoki region and provided new information on the location of the shoreline. The shore displacement history of the Mätäjoki Valley comprises the following stages: 1. The northernmost hills of the Mätäjoki Valley rose from the Ancylus Lake approximately 10,000 years ago; shore displacement was fast during the following thousand years. 2. The area was an archipelago with a relatively steady shoreline 9,000–7,000 years ago; around 8,000 years ago the shoreline drew back in the middle and southern parts of the river valley because of the transgression of the Litorina Sea. 3. Mätäjoki was a sheltered bay of the Litorina Sea 6,000–5,000 years ago; the Vantaanjoki River started to flow into the Mätäjoki Valley approximately 5,000 years ago. 4. The sediment plains in the southern part of the river valley rose from the sea rather quickly 5,000–3,000 years ago; salt water still pushed into the southernmost part of the valley 4,000 years ago. 5. The shoreline receded to the Pitäjänmäki rapids 3,000–2,000 years ago, where it stayed for at least a thousand years. The predictive models predicted the locations of dwelling sites moderately well; the most accurate predictions were on the eastern shore and in the Malminkartano area. Of the environmental variables, sand and slope aspect had the best predictive power. The results suggest that the Mätäjoki Valley was a favourable place to live especially 6,000–5,000 years ago, when the climate was mild and the vegetation lush. The laser point data set used here works best in shore displacement studies in rural areas, or when further detailed palaeogeographic or hydrological analysis of the research area is not needed.
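The logistic-regression scoring behind such predictive maps might look as follows; the coefficients and variable names are invented for illustration, not the thesis's fitted values:

```python
import math

def dwelling_probability(sand, aspect_south, dist_to_shore_m,
                         b0=-1.0, b_sand=1.5, b_aspect=0.8, b_dist=-0.002):
    """P(dwelling site) as the logistic function of a linear predictor
    built from environmental variables; all coefficients hypothetical."""
    z = b0 + b_sand * sand + b_aspect * aspect_south + b_dist * dist_to_shore_m
    return 1.0 / (1.0 + math.exp(-z))

# A sandy, south-facing cell near the reconstructed shoreline scores
# much higher than a distant, unfavourable one:
near = dwelling_probability(sand=1, aspect_south=1, dist_to_shore_m=50)
far  = dwelling_probability(sand=0, aspect_south=0, dist_to_shore_m=2000)
print(round(near, 2), round(far, 2))  # → 0.77 0.01
```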


The factors affecting non-industrial private forest (NIPF) landowners' strategic decisions in management planning are studied. A genetic algorithm is used to induce a set of rules predicting the potential cut implied by the landowners' choices of preferred timber management strategies. The rules are based on variables describing the characteristics of the landowners and their forest holdings. The predictive ability of the genetic algorithm is compared with that of linear regression analysis using identical data sets. The data are cross-validated seven times, applying both genetic algorithm and regression analyses, in order to examine the data sensitivity and robustness of the generated models. The optimal rule set derived from the genetic algorithm analyses included the following variables: mean initial volume, the landowner's positive price expectations for the next eight years, the landowner's classification as a farmer, and a preference for the recreational use of the forest property. When tested with previously unseen test data, the optimal rule set yielded a relative root mean square error of 0.40. In the regression analyses, the optimal regression equation consisted of the following variables: mean initial volume, proportion of forestry income, intention to cut extensively in the future, and positive price expectations for the next two years. The R2 of the optimal regression equation was 0.34, and the relative root mean square error obtained on the test data was 0.38. In both models, mean initial volume and positive stumpage price expectations entered as significant predictors of the potential cut of the preferred timber management strategy. When tested with the complete data set of 201 observations, both the optimal rule set and the optimal regression model achieved the same level of accuracy.
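One plausible reading of the relative root mean square error used to compare the two models is RMSE divided by the mean observed value (the study may normalise differently); a sketch with toy numbers, not the NIPF data:

```python
import math

def relative_rmse(observed, predicted):
    """RMSE scaled by the mean of the observations (one common
    convention for a 'relative' RMSE; stated here as an assumption)."""
    n = len(observed)
    rmse = math.sqrt(sum((o - p) ** 2 for o, p in zip(observed, predicted)) / n)
    return rmse / (sum(observed) / n)

obs  = [100.0, 120.0, 80.0, 110.0]   # toy potential-cut observations
pred = [ 90.0, 130.0, 85.0, 100.0]   # toy model predictions
print(round(relative_rmse(obs, pred), 3))  # → 0.088
```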


Gaussian processes (GPs) are promising Bayesian methods for classification and regression problems, and they have also been used for semi-supervised learning tasks. In this paper, we propose a new algorithm for solving the semi-supervised binary classification problem using sparse GP regression (GPR) models. It is closely related to semi-supervised learning based on support vector regression (SVR) and maximum margin clustering. The proposed algorithm is simple and easy to implement. Unlike the SVR-based algorithm, it directly yields a sparse solution, and the hyperparameters are estimated easily without resorting to expensive cross-validation. The use of a sparse GPR model makes the proposed algorithm scalable. Preliminary results on synthetic and real-world data sets demonstrate the efficacy of the new algorithm.
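For orientation, a minimal GP-regression posterior mean with an RBF kernel and two training points, solving the 2x2 system (K + s²I)a = y by hand; the paper's sparse, semi-supervised machinery is not reproduced here:

```python
import math

def rbf(x1, x2, ell=1.0):
    """Squared-exponential (RBF) kernel for scalar inputs."""
    return math.exp(-(x1 - x2) ** 2 / (2 * ell ** 2))

def gp_mean(x_star, X, y, noise=1e-6):
    """GPR posterior mean k(x*)' (K + s^2 I)^{-1} y, two points only."""
    k11 = rbf(X[0], X[0]) + noise
    k12 = rbf(X[0], X[1])
    k22 = rbf(X[1], X[1]) + noise
    det = k11 * k22 - k12 * k12
    a0 = ( k22 * y[0] - k12 * y[1]) / det    # (K + s^2 I)^{-1} y
    a1 = (-k12 * y[0] + k11 * y[1]) / det
    return rbf(x_star, X[0]) * a0 + rbf(x_star, X[1]) * a1

X, y = [0.0, 1.0], [0.0, 1.0]
# With negligible noise, the posterior mean interpolates the targets:
print(round(gp_mean(0.0, X, y), 3), round(gp_mean(1.0, X, y), 3))
```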


This paper presents an optimization algorithm for an ammonia reactor based on a regression model relating the yield to several parameters, control inputs and disturbances. The model is derived from data generated by hybrid simulation of the steady-state equations describing the reactor's behaviour. The simplicity of the optimization program, along with its ability to take into account constraints on flow variables, makes it well suited to supervisory control applications.
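The supervisory optimization step can be sketched as a constrained search over a fitted yield model; the quadratic surface, bounds and flow constraint below are invented for illustration, not the paper's regression model:

```python
def yield_model(flow, temp):
    """Hypothetical fitted regression surface: concave in both the
    flow control input and the reactor temperature."""
    return -0.5 * (flow - 6.0) ** 2 - 0.1 * (temp - 400.0) ** 2 / 100 + 50.0

best = None
for flow in [f / 10 for f in range(10, 81)]:   # candidate flows 1.0..8.0
    if flow > 5.0:                             # flow-variable constraint
        continue
    for temp in range(350, 451, 5):            # candidate temperatures
        v = yield_model(flow, temp)
        if best is None or v > best[0]:
            best = (v, flow, temp)

# The unconstrained optimum (flow = 6.0) is infeasible, so the search
# settles on the constraint boundary:
print(best[1], best[2])  # → 5.0 400
```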


Energy balance modelling is part of the development work of the KarjaKompassi project. The aim of this thesis was to develop mathematical models that predict the energy balance of dairy cows in advance and that exploit information becoming available during lactation. The explanatory variables comprised diet, feed, milk yield, test-day milking, live weight, and body condition score data. The data were collected from 12 Finnish feeding experiments lasting 8–28 weeks of lactation, beginning immediately after calving. Of the 344 dairy cows included, one quarter were Friesian and the rest Ayrshire. The main data set for multiparous cows comprised 2,647 observations (experiment × cow × lactation week) and that for primiparous cows 1,070. The data were analysed with the MIXED procedure of the SAS software, and outliers were removed using Tukey's method. Correlation analysis was used to examine the relationships between energy balance and the explanatory variables, and energy balance was modelled by regression analysis. The effect of day of lactation on energy balance was described with five different functions, with cow within experiment as a random effect. Model fit was assessed by the residual error, the coefficient of determination, and the Bayesian information criterion, and the best models were tested on independent data. The Ali-Schaeffer function described the effect of lactation day on energy balance well and was used as the basic model. In all energy balance models, variation increased from lactation week 12 onwards, as the number of observations decreased and the energy balance turned positive. Of the variables available before calving, the concentrate proportion of the diet and the concentrate intake index improved the coefficient of determination and reduced the residual error. The success of feeding can be monitored with models that include milk yield, milk fat content and the fat-protein ratio, or energy-corrected milk (ECM); standardising ECM reduced the residual error of the model. Live weight and body condition score were weak predictors. The models can be used in planning and monitoring feeding at herd level, but they are not suitable for predicting the energy balance of an individual cow.
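The Ali-Schaeffer-type basic model mentioned above can be sketched as follows, assuming the usual form with terms in t/340 and ln(340/t); the coefficients are invented for illustration, not the thesis's fitted values:

```python
import math

def ali_schaeffer(t, a, b, c, d, e):
    """Lactation-curve-style function of day of lactation t (assumed
    form: a + b*u + c*u^2 + d*w + e*w^2 with u = t/340, w = ln(340/t))."""
    u, w = t / 340.0, math.log(340.0 / t)
    return a + b * u + c * u ** 2 + d * w + e * w ** 2

# Toy coefficients producing the typical pattern: an energy deficit in
# early lactation that recovers to a positive balance later on.
curve = [round(ali_schaeffer(t, 20, 30, -10, -25, 1.5), 1)
         for t in (7, 30, 90, 200)]
print(curve)  # deficit early, positive balance by late lactation
```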


Background: In higher primates, although LH/CG plays a critical role in the control of corpus luteum (CL) function, the direct effects of progesterone (P4) in the maintenance of CL structure and function are unclear. Several experiments were conducted in the bonnet monkey to examine the direct effects of P4 on gene expression changes in the CL during induced luteolysis and the late luteal phase of natural cycles. Methods: To identify differentially expressed genes encoding PR, PR binding factors, cofactors and PR downstream signaling target genes, genome-wide analysis data generated in the CL of monkeys after LH/P4 depletion and LH replacement were mined and validated by real-time RT-PCR analysis. Initially, the expression of these P4-related genes was determined in the CL during different stages of the luteal phase. The recently reported model system of induced luteolysis, which remains responsive to trophic support, afforded an ideal situation in which to examine the direct effects of P4 on the structure and function of the CL. For this purpose, P4 was infused via ALZET pumps into monkeys 24 h after LH/P4 depletion to maintain mid-luteal-phase circulating P4 concentrations (P4 replacement). In another experiment, exogenous P4 was supplemented during the late luteal phase to mimic early pregnancy. Results: Based on the published microarray data, 45 genes were identified as commonly regulated by LH and P4. From these, 19 genes belonging to PR signaling were selected to determine their expression in the LH/P4 depletion and P4 replacement experiments. Analysis revealed 8 of these 19 genes to be directly responsive to P4, whereas the others were regulated by both LH and P4. Progesterone supplementation for 24 h during the late luteal phase also changed the expression of 17 of the 19 genes examined. Conclusion: Taken together, these results suggest that P4 regulates, directly or indirectly, the expression of a number of genes involved in CL structure and function.


Background—Mutations of the APC gene cause familial adenomatous polyposis (FAP), a hereditary colorectal cancer predisposition syndrome. Aims—To conduct a cost comparison analysis of predictive genetic testing versus conventional clinical screening for individuals at risk of inheriting FAP, from the perspective of a third party payer. Methods—All direct health care costs for both screening strategies were measured according to time and motion, and the expected costs evaluated using a decision analysis model. Results—The baseline analysis predicted that screening a prototype FAP family would cost $4975/£3109 by the molecular testing strategy and $8031/£5019 by the clinical screening strategy, when family members were monitored with the same frequency of clinical surveillance (every two to three years). Sensitivity analyses revealed that the genetic testing approach is cost saving for key variables including kindred size, the age of screening onset, and the cost of mutation identification in a proband. However, if APC mutation carriers were monitored at an increased (annual) frequency, the cost of the genetic screening strategy increased to $7483/£4677 and was especially sensitive to variability in the age of onset of screening, family size, and the cost of genetic testing of at-risk relatives. Conclusions—In FAP kindreds, a predictive genetic testing strategy costs less than conventional clinical screening, provided that the frequency of surveillance is identical under either strategy. An additional significant benefit is the elimination of unnecessary colonic examinations for family members found to be noncarriers.
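The decision-analysis comparison can be sketched as an expected-cost calculation; all costs and the 0.5 carrier probability below are illustrative placeholders, not the study's measured figures:

```python
def expected_cost_genetic(n_at_risk, cost_proband_test, cost_relative_test,
                          cost_surveillance, p_carrier=0.5):
    """Genetic strategy: characterize the proband's mutation once, test
    each at-risk relative, and surveil only the expected carriers."""
    tests = cost_proband_test + n_at_risk * cost_relative_test
    surveillance = n_at_risk * p_carrier * cost_surveillance
    return tests + surveillance

def expected_cost_clinical(n_at_risk, cost_surveillance):
    """Clinical strategy: every at-risk relative undergoes surveillance."""
    return n_at_risk * cost_surveillance

genetic  = expected_cost_genetic(10, 1000, 150, 600)
clinical = expected_cost_clinical(10, 600)
# Noncarriers need no colonoscopies, so the genetic strategy costs less:
print(genetic, clinical)  # → 5500.0 6000
```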


Perfect or even mediocre weather predictions over a long period are almost impossible because a small initial error ultimately grows into a significant one. Even though sensitivity to initial conditions limits predictability in chaotic systems, an ensemble of predictions from different possible initial conditions, together with a prediction algorithm capable of resolving the fine structure of the chaotic attractor, can reduce the prediction uncertainty to some extent. Traditional chaotic prediction methods in hydrology are based on local models with a single optimum initial condition, which can model the sudden divergence of trajectories with different local functions. Conceptually, global models are ineffective in modeling the highly unstable structure of the chaotic attractor. This paper focuses on an ensemble prediction approach in which the phase space is reconstructed using different combinations of the chaotic parameters, i.e., embedding dimension and delay time, to quantify the uncertainty in initial conditions. The ensemble approach is implemented through a local-learning wavelet network model with a global feed-forward neural network structure for phase space prediction of chaotic streamflow series. Uncertainties in future predictions are quantified by creating an ensemble of predictions with the wavelet network over a range of plausible embedding dimensions and delay times. The ensemble approach proved to be 50% more efficient than single predictions for both the local approximation and wavelet network approaches, and the wavelet network approach proved 30%-50% superior to the local approximation approach. Compared with the traditional local approximation approach with a single initial condition, the total predictive uncertainty in the streamflow is reduced when modeled with ensemble wavelet networks for different lead times. The localization property of wavelets, with their different dilation and translation parameters, helps capture most of the statistical properties of the observed data. The need to take into account all plausible initial conditions, and to bring together the characteristics of both local and global approaches to model the unstable yet ordered chaotic attractor of a hydrologic series, is clearly demonstrated.
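The phase-space reconstruction that generates the ensemble members can be sketched as a delay embedding; the streamflow values are toy numbers, and the wavelet-network predictor itself is not shown:

```python
def delay_embed(series, m, tau):
    """Return the delay vectors [x[i], x[i+tau], ..., x[i+(m-1)tau]]
    for embedding dimension m and delay time tau."""
    span = (m - 1) * tau
    return [tuple(series[i + j * tau] for j in range(m))
            for i in range(len(series) - span)]

flow = [1, 3, 2, 5, 4, 6, 5, 7]       # toy streamflow series
# Each plausible (m, tau) pair yields one ensemble member's phase space:
for m, tau in [(2, 1), (3, 2)]:
    vectors = delay_embed(flow, m, tau)
    print(m, tau, len(vectors), vectors[0])
```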


This paper introduces a scheme for classifying online handwritten characters based on polynomial regression of the sampled points of the sub-strokes in a character. Segmentation is based on the velocity profile of the written character, which requires smoothing of the velocity profile. We propose a novel scheme for smoothing the velocity profile curve and identifying the critical points at which to segment the character. We also propose another segmentation method based on human visual perception. We then extract two sets of features for recognizing handwritten characters. Each sub-stroke is a simple curve, a part of the character, and is represented by the distance of each point from the first point; this forms the first feature vector for each character. The second feature vector consists of the coefficients obtained from B-splines fitted to the control knots produced by the segmentation algorithm. The feature vectors are fed to an SVM classifier, which achieves an accuracy of 68% with the polynomial regression technique and 74% with the spline fitting method.
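The first feature set (the distance of each sampled point from the sub-stroke's first point) can be sketched as follows; the points are invented, not real pen-trace data:

```python
import math

def distance_features(points):
    """Represent a sub-stroke by each point's Euclidean distance
    from the sub-stroke's first point."""
    x0, y0 = points[0]
    return [math.hypot(x - x0, y - y0) for x, y in points]

substroke = [(0, 0), (3, 4), (6, 8), (6, 0)]   # toy sampled pen points
print(distance_features(substroke))  # → [0.0, 5.0, 10.0, 6.0]
```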


We address the problem of local-polynomial modeling of smooth time-varying signals with unknown functional form, in the presence of additive noise. The problem formulation is in the time domain and the polynomial coefficients are estimated in the pointwise minimum mean square error (PMMSE) sense. The choice of the window length for local modeling introduces a bias-variance tradeoff, which we solve optimally by using the intersection-of-confidence-intervals (ICI) technique. The combination of the local polynomial model and the ICI technique gives rise to an adaptive signal model equipped with a time-varying PMMSE-optimal window length whose performance is superior to that obtained by using a fixed window length. We also evaluate the sensitivity of the ICI technique with respect to the confidence interval width. Simulation results on electrocardiogram (ECG) signals show that at 0 dB signal-to-noise ratio (SNR), one can achieve about 12 dB improvement in SNR. Monte Carlo performance analysis shows that the performance is comparable to the basic wavelet techniques. For 0 dB SNR, the adaptive window technique yields about 2-3 dB higher SNR than wavelet regression techniques, and for SNRs greater than 12 dB, the wavelet techniques yield about 2 dB higher SNR.
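The ICI window-selection rule can be sketched as follows: for increasing window lengths, intersect the confidence intervals [estimate - g*sigma, estimate + g*sigma] and keep the largest window before the intersection becomes empty. The estimate/sigma pairs below are illustrative, not derived from an ECG signal:

```python
def ici_select(windows, estimates, sigmas, gamma=2.0):
    """Intersection-of-confidence-intervals rule: track the running
    intersection of the intervals and stop when it becomes empty."""
    lo, hi = float("-inf"), float("inf")
    best = windows[0]
    for w, est, s in zip(windows, estimates, sigmas):
        lo = max(lo, est - gamma * s)
        hi = min(hi, est + gamma * s)
        if lo > hi:                 # intervals no longer intersect
            break
        best = w
    return best

windows   = [8, 16, 32, 64, 128]
estimates = [1.00, 1.02, 1.05, 1.40, 2.00]   # bias grows with the window
sigmas    = [0.40, 0.20, 0.10, 0.05, 0.02]   # variance shrinks with it
print(ici_select(windows, estimates, sigmas))  # → 32
```

The selected window balances the bias-variance tradeoff pointwise; rerunning the rule at each time instant yields the time-varying window length described above.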