907 results for resource-based theory (RBV)


Relevance: 40.00%

Abstract:

High density oligonucleotide expression arrays are a widely used tool for the measurement of gene expression on a large scale. Affymetrix GeneChip arrays appear to dominate this market. These arrays use short oligonucleotides to probe for genes in an RNA sample. Due to optical noise, non-specific hybridization, probe-specific effects, and measurement error, ad hoc measures of expression that summarize probe intensities can lead to imprecise and inaccurate results. Various researchers have demonstrated that expression measures based on simple statistical models can provide great improvements over the ad hoc procedure offered by Affymetrix. Recently, physical models based on molecular hybridization theory have been proposed as useful tools for predicting, for example, non-specific hybridization. These physical models show great potential in terms of improving existing expression measures. In this paper we demonstrate that the system producing the measured intensities is too complex to be fully described by these relatively simple physical models, and we propose empirically motivated stochastic models that complement the above-mentioned molecular hybridization theory to provide a comprehensive description of the data. We discuss how the proposed model can be used to obtain improved measures of expression useful to data analysts.
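
A minimal sketch of the kind of background-corrected, log-scale probe summarization that such stochastic models support is given below. The additive-background/multiplicative-signal form, the function names, and the toy data are illustrative assumptions, not the authors' actual model.

```python
import numpy as np

# Illustrative form:  PM_ij = B_ij + S_ij,  log(S_ij) = mu_i + p_j + eps_ij
# where i indexes arrays, j indexes probes of one probe set, B is background
# (optical noise + non-specific hybridization), mu_i is the expression level
# on array i, and p_j is a probe-specific effect.

def background_correct(pm, background_estimate):
    """Subtract an estimated background, keeping intensities positive."""
    return np.maximum(pm - background_estimate, 1.0)

def summarize_probeset(pm, background_estimate):
    """Return one expression value per array via a robust log-scale summary."""
    corrected = background_correct(pm, background_estimate)   # arrays x probes
    log_int = np.log2(corrected)
    probe_effect = np.median(log_int, axis=0)                 # per-probe effect
    return np.median(log_int - probe_effect, axis=1)          # per-array expression

# Toy usage: 4 arrays, 11 probes in one probe set.
rng = np.random.default_rng(0)
pm = rng.lognormal(mean=7.0, sigma=0.4, size=(4, 11)) + 80.0
print(summarize_probeset(pm, background_estimate=80.0))
```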

Relevance: 40.00%

Abstract:

Traditional methods do not measure people's risk attitudes naturally and precisely. Therefore, a fuzzy risk attitude classification method is developed. Since prospect theory is usually considered an effective model of decision making, the personalized parameters in prospect theory are first fuzzified to distinguish people with different risk attitudes, and then a fuzzy classification database schema is applied to calculate the exact value of risk value attitude and risk behavior attitude. Finally, by applying a two-hierarchical classification model, the precise value of the synthetical risk attitude can be acquired.
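
As an illustration of the ingredients mentioned above, the sketch below combines the standard Tversky-Kahneman prospect-theory value function with a toy fuzzy membership function for risk attitude; the membership function and thresholds are invented for illustration and are not the paper's classification database schema.

```python
import numpy as np

def pt_value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Prospect-theory value of an outcome x relative to the reference point 0."""
    return x**alpha if x >= 0 else -lam * (-x)**beta

def risk_seeking_membership(alpha):
    """Toy fuzzy membership: a larger curvature parameter counts as more risk seeking."""
    return float(np.clip((alpha - 0.6) / 0.6, 0.0, 1.0))

# Classify a person from a fitted alpha parameter (illustrative thresholds only).
alpha_fitted = 0.95
mu = risk_seeking_membership(alpha_fitted)
label = "risk seeking" if mu > 0.5 else "risk averse"
print(pt_value(100.0), pt_value(-100.0), mu, label)
```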

Relevance: 40.00%

Abstract:

Despite long-standing calls for patient-focused research on individuals with generalized anxiety spectrum disorder (GAD), there is little systematized knowledge about the in-session behaviors of these patients. The primary objective of this study was to describe the in-session trajectories of the patients' level of explication (as an indicator of an elaborated exposure of negative emotionality) and the patients' focus on their own resources, and how these trajectories are associated with post-treatment outcome. For GAD patients, a high level of explication might be seen as an indicator of successful exposure of avoided negative emotionality during therapy sessions. Observers made minute-by-minute ratings of 1100 minutes of video from 20 patient-therapist dyads. The results indicated that a higher level of explication, generally observed at a later stage of the therapy sessions, and the patients' focus on competencies at an early stage were highly associated with positive therapy outcome at post-treatment assessment, independent of pretreatment distress, rapid response of well-being and symptom reduction, as well as the therapists' professional experience and therapy length. These results are discussed from the perspective of patients' emotion regulation and therapists' counterregulation. It is assumed that GAD patients are especially skilled at masking difficult emotions. Explication level and emotion regulation are important variables for this patient group, but their relation to outcome is different.

Relevance: 40.00%

Abstract:

Spurred by the consumer market, companies increasingly deploy smartphones and tablet computers in their operations. However, unlike private users, companies typically struggle to cover their needs with existing applications and therefore extend mobile software platforms with customized applications from multiple software vendors. Companies thereby combine the concepts of multi-sourcing and software platform ecosystems in a novel platform-based multi-sourcing setting. This, however, implies the clash of two different approaches to coordinating the underlying one-to-many inter-organizational relationships, and little is known so far about the impacts of merging coordination approaches. Relying on convention theory, we address this gap by analyzing a platform-based multi-sourcing project between a client and six software vendors that develop twenty-three custom-made applications on a common platform (Android). In doing so, we aim to understand how unequal coordination approaches merge, and whether and why particular coordination mechanisms, design decisions, or practices disappear while new ones emerge.

Relevance: 40.00%

Abstract:

Recent theoretical work has examined the spatial distribution of unemployment using the efficiency wage model as the mechanism by which unemployment arises in the urban economy. This paper extends the standard efficiency wage model to allow for behavioral substitution between leisure time at home and effort at work. In equilibrium, residing at a location with a long commute affects the time available for leisure at home and therefore affects the trade-off between effort at work and the risk of unemployment. This model implies an empirical relationship between expected commutes and labor market outcomes, which is tested using the Public Use Microdata Sample of the 2000 U.S. Decennial Census. The empirical results suggest that efficiency wages operate primarily for blue-collar workers, i.e., workers who tend to be in occupations that face higher levels of supervision. For this subset of workers, longer commutes imply higher levels of unemployment and higher wages, both of which are consistent with shirking and leisure being substitutable.
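
A hedged sketch of the kind of empirical test the model implies is given below: synthetic data are generated so that commute length matters only for blue-collar workers, and wage and unemployment regressions recover the interaction. The specification, variable names, and all numbers are illustrative assumptions, not the paper's actual estimation.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 5000
df = pd.DataFrame({
    "commute": rng.gamma(shape=2.0, scale=15.0, size=n),   # commute in minutes
    "blue_collar": rng.integers(0, 2, size=n),
})
# Synthetic outcomes that embed the hypothesized pattern, for illustration only.
df["log_wage"] = 2.5 + 0.002 * df.commute * df.blue_collar + rng.normal(0, 0.3, n)
df["unemployed"] = (rng.random(n) < 0.04 + 0.0008 * df.commute * df.blue_collar).astype(int)

# Longer commutes should raise both wages and unemployment risk for blue-collar workers.
wage_fit = smf.ols("log_wage ~ commute * blue_collar", data=df).fit()
unemp_fit = smf.logit("unemployed ~ commute * blue_collar", data=df).fit(disp=0)
print(wage_fit.params["commute:blue_collar"], unemp_fit.params["commute:blue_collar"])
```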

Relevance: 40.00%

Abstract:

In an increasing number of applications (e.g., in embedded, real-time, or mobile systems) it is important or even essential to ensure conformance with respect to a specification expressing resource usages, such as execution time, memory, energy, or user-defined resources. In previous work we have presented a novel framework for data size-aware, static resource usage verification. Specifications can include both lower and upper bound resource usage functions. In order to statically check such specifications, upper- and lower-bound resource usage functions (on input data sizes) that approximate the actual resource usage of the program are automatically inferred and compared against the specification. The outcome of the static checking of assertions can express intervals for the input data sizes such that a given specification can be proved for some intervals but disproved for others. After an overview of the approach, in this paper we provide a number of novel contributions: we present a full formalization, and we report on and provide results from an implementation within the Ciao/CiaoPP framework (which provides a general, unified platform for static and run-time verification, as well as unit testing). We also generalize the checking of assertions to allow preconditions expressing intervals within which the input data size of a program is supposed to lie (i.e., intervals for which each assertion is applicable), and we extend the class of resource usage functions that can be checked.
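
The interval-style outcome of such checking can be illustrated with a small sketch: compare a hypothetical inferred upper-bound cost function against a specified bound and solve for the data sizes where the assertion holds. The concrete functions below are invented for illustration; in the framework they are inferred automatically from the program.

```python
import sympy as sp

n = sp.symbols("n", positive=True)
inferred_upper = 2*n**2 + 10        # illustrative cost inferred for the program
specified_upper = 50*n              # illustrative bound stated in the assertion

# The assertion "cost <= 50*n" is proved for the data sizes where the inferred
# upper bound stays below the specified one, and disproved elsewhere.
proved_region = sp.solve_univariate_inequality(inferred_upper <= specified_upper, n,
                                               relational=False)
print(proved_region)   # an interval of input data sizes n where the spec checks
```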

Relevance: 40.00%

Abstract:

E-learning systems output a huge quantity of data on a learning process. However, it takes a lot of specialist human resources to manually process these data and generate an assessment report. Additionally, for formative assessment, the report should state the attainment level of the learning goals defined by the instructor. This paper describes the use of the granular linguistic model of a phenomenon (GLMP) to model the assessment of the learning process and implement the automated generation of an assessment report. GLMP is based on fuzzy logic and the computational theory of perceptions. This technique is useful for implementing complex assessment criteria using inference systems based on linguistic rules. Apart from the grade, the model also generates a detailed natural language progress report on the achieved proficiency level, based exclusively on the objective data gathered from correct and incorrect responses. This is illustrated by applying the model to the assessment of learning Dijkstra's algorithm with GRAPHs, a visual simulation-based graph algorithm learning environment.
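
A minimal sketch of the kind of fuzzy linguistic summarization a GLMP performs is shown below: objective data (here, a fraction of correct responses) are mapped to linguistic labels via membership functions and rendered as a short report sentence. The membership functions, labels, and wording are illustrative only, not the paper's actual model.

```python
def tri(x, a, b, c):
    """Triangular membership function on [a, c] with peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def proficiency_report(correct, total):
    """Map a score to a linguistic label and emit a one-sentence report."""
    score = correct / total
    labels = {
        "low":    tri(score, -0.01, 0.0, 0.5),
        "medium": tri(score, 0.25, 0.5, 0.75),
        "high":   tri(score, 0.5, 1.0, 1.01),
    }
    best = max(labels, key=labels.get)
    return f"The achieved proficiency level is {best} ({correct}/{total} correct responses)."

print(proficiency_report(17, 20))
```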

Relevance: 40.00%

Abstract:

Advances in electronics nowadays facilitate the design of smart spaces based on physical mash-ups of sensor and actuator devices. At the same time, software paradigms such as the Internet of Things (IoT) and the Web of Things (WoT) are motivating the creation of technology to support the development and deployment of web-enabled embedded sensor and actuator devices with two major objectives: (i) to integrate sensing and actuating functionalities into everyday objects, and (ii) to easily allow a diversity of devices to plug into the Internet. Currently, developers who apply this Internet-oriented approach need a solid understanding of specific platforms and web technologies. In order to ease this development process, this research proposes a Resource-Oriented and Ontology-Driven Development (ROOD) methodology based on the Model Driven Architecture (MDA). This methodology aims at enabling the development of smart spaces through a set of modeling tools and semantic technologies that support the definition of the smart space and the automatic generation of code at the hardware level. ROOD feasibility is demonstrated by building an adaptive health monitoring service for a Smart Gym.
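
To illustrate the resource-oriented, WoT-style device integration that ROOD targets, the sketch below exposes a hypothetical sensor as an HTTP resource returning JSON. The endpoint name, port, and payload are assumptions; in the methodology, code of this kind would be generated from models rather than written by hand.

```python
import json
import random
from http.server import BaseHTTPRequestHandler, HTTPServer

class SensorResource(BaseHTTPRequestHandler):
    """A single web-enabled sensor exposed as a read-only HTTP resource."""

    def do_GET(self):
        if self.path == "/sensors/heart-rate":
            body = json.dumps({"unit": "bpm", "value": random.randint(60, 160)}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

if __name__ == "__main__":
    # GET http://localhost:8080/sensors/heart-rate returns a JSON reading.
    HTTPServer(("0.0.0.0", 8080), SensorResource).serve_forever()
```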

Relevance: 40.00%

Abstract:

The objective of this paper is to design a path following control system for a car-like mobile robot using classical linear control techniques, so that it adapts on-line to varying conditions during the trajectory following task. The main advantage of the proposed control structure is that well-known linear control theory can be applied to calculate PID controllers that fulfil the control requirements, while at the same time it is flexible enough to be applied under the non-linear, changing conditions of the path following task. For this purpose, the Frenet frame kinematic model of the robot is linearised at a varying working point that is calculated as a function of the actual velocity, the path curvature, and the kinematic parameters of the robot, yielding a transfer function that varies along the trajectory. The proposed controller is formed by a combination of an adaptive PID and a feed-forward controller, which varies according to the working conditions and compensates for the non-linearity of the system. The good features and flexibility of the proposed control structure have been demonstrated through realistic simulations that include both the kinematics and dynamics of the car-like robot.
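
The gain-scheduling idea can be sketched as follows: PID gains are recomputed from the current working point (velocity and path curvature) before each control update. The scheduling law and numbers below are invented for illustration; the paper derives its gains from the linearised Frenet-frame model of the car-like robot.

```python
class AdaptivePID:
    """PID controller whose gains are rescheduled from the current working point."""

    def __init__(self, dt):
        self.dt = dt
        self.integral = 0.0
        self.prev_error = 0.0

    def schedule_gains(self, velocity, curvature):
        # Illustrative scheduling law: stiffer at low speed, softer in tight curves.
        kp = 2.0 / max(velocity, 0.1) * (1.0 - 0.3 * abs(curvature))
        ki = 0.1 * kp
        kd = 0.05 * kp
        return kp, ki, kd

    def update(self, error, velocity, curvature):
        kp, ki, kd = self.schedule_gains(velocity, curvature)
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return kp * error + ki * self.integral + kd * derivative

# Toy usage: lateral error of 0.2 m at 1.5 m/s on a gentle curve.
pid = AdaptivePID(dt=0.05)
print(pid.update(error=0.2, velocity=1.5, curvature=0.1))
```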

Relevance: 40.00%

Abstract:

A novel algorithm based on bimatrix game theory has been developed to improve the accuracy and reliability of a speaker diarization system. This algorithm fuses the output data of two open-source speaker diarization programs, LIUM and SHoUT, taking advantage of the best properties of each one. The performance of this new system has been tested by means of audio streams from several movies. Preliminary results on fragments of five movies show improvements of 63% in false alarm and missed speech errors with respect to the LIUM and SHoUT systems working alone. Moreover, the number of recognized speakers is also improved by 20%, getting closer to the real number of speakers in the audio stream.
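
The game-theoretic fusion can be illustrated with a small sketch: for a disputed segment, each system chooses whether to keep its own speaker label or concede to the other system's, and a pure-strategy Nash equilibrium of the payoff bimatrix decides the outcome. The payoff values below are invented; the paper's actual payoffs are built from the systems' own quality measures.

```python
import numpy as np

def pure_nash_equilibria(A, B):
    """Return all pure-strategy Nash equilibria (i, j) of the bimatrix game (A, B)."""
    eqs = []
    for i in range(A.shape[0]):
        for j in range(A.shape[1]):
            # i is a best response to j for player 1, and j to i for player 2.
            if A[i, j] >= A[:, j].max() and B[i, j] >= B[i, :].max():
                eqs.append((i, j))
    return eqs

# Rows: system 1 chooses {keep, concede}; columns: system 2 chooses {keep, concede}.
A = np.array([[0.2, 0.9],    # payoffs of system 1 (illustrative)
              [0.4, 0.3]])
B = np.array([[0.1, 0.5],    # payoffs of system 2 (illustrative)
              [0.4, 0.6]])
print(pure_nash_equilibria(A, B))   # [(0, 1)]: system 1 keeps its label, system 2 concedes
```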

Relevance: 40.00%

Abstract:

This paper presents a description of our system for the Albayzin 2012 LRE competition. One of the main characteristics of this evaluation was the reduced number of files available for training the system, especially for the empty condition, where no training data set was provided but only a development set. In addition, the whole database was created from online videos, and around one third of the training data was labeled as noisy files. Our primary system was the fusion of three different i-vector-based systems: an acoustic system based on MFCCs, a phonotactic system using trigrams of phone-posteriorgram counts, and another acoustic system based on RPLPs that improved robustness against noise. A contrastive system that included new features based on the glottal source was also presented. Official and post-evaluation results for all the conditions, using both the metrics proposed for the evaluation and the Cavg metric, are presented in the paper.
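
A minimal sketch of score-level fusion of several subsystems, the general idea behind combining the three i-vector systems described above, is given below using a logistic-regression back-end trained on development scores. All data are synthetic and the paper's actual fusion back-end may differ.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n_dev = 300
labels = rng.integers(0, 2, size=n_dev)            # 1 = target language (toy binary task)
# Synthetic per-subsystem scores, each weakly separating the two classes.
scores = np.column_stack([labels + rng.normal(0, s, n_dev) for s in (0.8, 1.0, 1.2)])

# The logistic regression learns how much weight to give each subsystem's score.
fusion = LogisticRegression().fit(scores, labels)
test_scores = np.array([[1.2, 0.7, 0.9]])          # scores from the three subsystems
print(fusion.predict_proba(test_scores)[0, 1])     # fused posterior of the target class
```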