784 results for resource-based theory (RBV)
Abstract:
High-density oligonucleotide expression arrays are a widely used tool for measuring gene expression on a large scale. Affymetrix GeneChip arrays appear to dominate this market. These arrays use short oligonucleotides to probe for genes in an RNA sample. Due to optical noise, non-specific hybridization, probe-specific effects, and measurement error, ad hoc measures of expression that summarize probe intensities can lead to imprecise and inaccurate results. Various researchers have demonstrated that expression measures based on simple statistical models can provide great improvements over the ad hoc procedure offered by Affymetrix. Recently, physical models based on molecular hybridization theory have been proposed as useful tools for predicting, for example, non-specific hybridization. These physical models show great potential for improving existing expression measures. In this paper we demonstrate that the system producing the measured intensities is too complex to be fully described by these relatively simple physical models, and we propose empirically motivated stochastic models that complement the above-mentioned molecular hybridization theory to provide a comprehensive description of the data. We discuss how the proposed model can be used to obtain improved measures of expression useful to data analysts.
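A minimal sketch of the kind of probe-level summarization that such stochastic models motivate: subtract an optical-noise floor, work on the log scale, remove probe-specific affinity effects, and summarize robustly. The constant background and the median-based summarization are illustrative assumptions, not the paper's actual model.

```python
import numpy as np

def summarize_probes(intensities, background=100.0):
    """Background-correct, log-transform, and robustly summarize
    probe-level intensities into one expression value per array.

    intensities: (n_probes, n_arrays) array of raw probe intensities.
    background:  assumed constant optical-noise floor (hypothetical value).
    """
    corrected = np.maximum(intensities - background, 1.0)  # avoid log of <= 0
    log_int = np.log2(corrected)
    # Remove probe-specific affinity effects by centering each probe row
    # on its median across arrays, then take the median over probes.
    probe_effect = np.median(log_int, axis=1, keepdims=True)
    return np.median(log_int - probe_effect + probe_effect.mean(), axis=0)
```

The median over probes is one simple robust choice; model-based alternatives fit the probe effects and measurement error explicitly.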
Abstract:
Traditional methods do not measure people's risk attitudes naturally and precisely. Therefore, a fuzzy risk attitude classification method is developed. Since prospect theory is widely considered an effective model of decision making, the personalized parameters in prospect theory are first fuzzified to distinguish people with different risk attitudes, and a fuzzy classification database schema is then applied to calculate the exact values of risk value attitude and risk behavior attitude. Finally, by applying a two-level hierarchical classification model, a precise value for the overall (synthetic) risk attitude can be acquired.
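To make the fuzzification idea concrete, here is a sketch using the standard Tversky-Kahneman prospect-theory value function, whose curvature parameter alpha can feed a fuzzy membership grade for "risk averse". The default parameter values are the commonly cited median estimates; the membership breakpoints are illustrative assumptions, not the paper's calibration.

```python
def prospect_value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Tversky-Kahneman value function: concave for gains,
    convex and loss-averse (scaled by lam) for losses."""
    return x ** alpha if x >= 0 else -lam * (-x) ** beta

def risk_averse_membership(alpha, low=0.5, high=1.0):
    """Hypothetical linear membership: a smaller alpha (more concave
    gains curve) maps to a higher degree of risk aversion."""
    if alpha <= low:
        return 1.0
    if alpha >= high:
        return 0.0
    return (high - alpha) / (high - low)
```

A classifier in this spirit would compute such membership grades for several linguistic labels ("risk averse", "risk neutral", "risk seeking") and pick, or blend, the strongest.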
Abstract:
Despite long-standing calls for patient-focused research on individuals with generalized anxiety spectrum disorder, there is little systematized knowledge about the in-session behaviors of these patients. The primary objective of this study was to describe the in-session trajectories of patients' level of explication (an indicator of an elaborated exposure of negative emotionality) and of patients' focus on their own resources, and to examine how these trajectories are associated with post-treatment outcome. With respect to GAD patients, a high level of explication might be seen as an indicator of successful exposure of avoided negative emotionality during therapy sessions. Observers made minute-by-minute ratings of 1,100 minutes of video from 20 patient-therapist dyads. The results indicated that a higher level of explication was generally observed at a later stage of the therapy sessions, and that the patients' focus on competencies at an early stage was highly associated with positive therapy outcome at post-treatment assessment, independent of pretreatment distress, rapid response of well-being and symptom reduction, and the therapists' professional experience and therapy length. These results are discussed from the perspective of patients' emotion regulation and therapists' counterregulation. It is assumed that GAD patients are especially skilled at masking difficult emotions. Explication level and emotion regulation are important variables for this patient group, but their relation to outcome differs.
Abstract:
Spurred by the consumer market, companies increasingly deploy smartphones and tablet computers in their operations. Unlike private users, however, companies typically struggle to cover their needs with existing applications, and therefore extend mobile software platforms with customized applications from multiple software vendors. Companies thereby combine the concepts of multi-sourcing and software platform ecosystems in a novel platform-based multi-sourcing setting. This implies, however, a clash of two different approaches to coordinating the underlying one-to-many inter-organizational relationships. So far, little is known about the impact of merging coordination approaches. Relying on convention theory, we address this gap by analyzing a platform-based multi-sourcing project between a client and six software vendors that develop twenty-three custom-made applications on a common platform (Android). In doing so, we aim to understand how unequal coordination approaches merge, and whether and why particular coordination mechanisms, design decisions, or practices disappear while new ones emerge.
Abstract:
Recent theoretical work has examined the spatial distribution of unemployment using the efficiency wage model as the mechanism by which unemployment arises in the urban economy. This paper extends the standard efficiency wage model to allow for behavioral substitution between leisure time at home and effort at work. In equilibrium, residing at a location with a long commute reduces the time available for leisure at home and therefore affects the trade-off between effort at work and risk of unemployment. This model implies an empirical relationship between expected commutes and labor market outcomes, which is tested using the Public Use Microdata Sample of the 2000 U.S. Decennial Census. The empirical results suggest that efficiency wages operate primarily for blue-collar workers, i.e., workers who tend to be in occupations that face higher levels of supervision. For this subset of workers, longer commutes imply higher levels of unemployment and higher wages, both of which are consistent with shirking and leisure being substitutable.
Abstract:
In an increasing number of applications (e.g., in embedded, real-time, or mobile systems) it is important or even essential to ensure conformance with respect to a specification expressing resource usages, such as execution time, memory, energy, or user-defined resources. In previous work we presented a novel framework for data size-aware, static resource usage verification. Specifications can include both lower- and upper-bound resource usage functions. In order to statically check such specifications, upper- and lower-bound resource usage functions (on input data sizes) approximating the actual resource usage of the program are automatically inferred and compared against the specification. The outcome of the static checking of assertions can express intervals for the input data sizes such that a given specification can be proved for some intervals but disproved for others. After an overview of the approach, this paper provides a number of novel contributions: we present a full formalization, and we report on and provide results from an implementation within the Ciao/CiaoPP framework (which provides a general, unified platform for static and run-time verification, as well as unit testing). We also generalize the checking of assertions to allow preconditions expressing intervals within which the input data size of a program is supposed to lie (i.e., intervals for which each assertion is applicable), and we extend the class of resource usage functions that can be checked.
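The interval-based verdicts can be illustrated numerically: compare an inferred upper-bound function against a specified upper bound over a range of input data sizes, and merge consecutive sizes with the same verdict into intervals. This is a hypothetical numeric sketch of the idea; the actual framework compares symbolic cost functions, not sampled values.

```python
def check_intervals(inferred_ub, spec_ub, sizes):
    """For each input data size n, the assertion is 'proved' if the
    inferred upper bound stays within the specified upper bound, and
    'disproved' otherwise. Consecutive sizes with the same verdict
    are merged into (verdict, start, end) intervals."""
    intervals, current = [], None
    for n in sizes:
        verdict = 'proved' if inferred_ub(n) <= spec_ub(n) else 'disproved'
        if current and current[0] == verdict and current[2] + 1 == n:
            current = (verdict, current[1], n)  # extend current interval
        else:
            if current:
                intervals.append(current)
            current = (verdict, n, n)
    if current:
        intervals.append(current)
    return intervals
```

For example, an inferred quadratic bound n**2 checked against a linear specification 10*n is proved only for small inputs, yielding one proved interval and one disproved interval.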
Abstract:
E-learning systems output a huge quantity of data on the learning process. However, it takes a lot of specialist human resources to manually process these data and generate an assessment report. Additionally, for formative assessment, the report should state the attainment level of the learning goals defined by the instructor. This paper describes the use of the granular linguistic model of a phenomenon (GLMP) to model the assessment of the learning process and implement the automated generation of an assessment report. GLMP is based on fuzzy logic and the computational theory of perceptions. This technique is useful for implementing complex assessment criteria using inference systems based on linguistic rules. Apart from the grade, the model also generates a detailed natural-language progress report on the achieved proficiency level, based exclusively on the objective data gathered from correct and incorrect responses. This is illustrated by applying the model to the assessment of Dijkstra's algorithm learning using a visual simulation-based graph algorithm learning environment called GRAPHs.
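The core mechanism, mapping objective response data to a linguistic label and then to a natural-language sentence, can be sketched with triangular membership functions. The labels, breakpoints, and report template below are illustrative assumptions, not the GLMP definitions used in the paper.

```python
def triangular(x, a, b, c):
    """Triangular membership function with support (a, c), peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def proficiency_report(correct, total):
    """Map the fraction of correct responses to a linguistic label and
    a one-sentence report (hypothetical labels and breakpoints)."""
    score = correct / total
    labels = {
        'low': triangular(score, -0.5, 0.0, 0.5),
        'medium': triangular(score, 0.0, 0.5, 1.0),
        'high': triangular(score, 0.5, 1.0, 1.5),
    }
    best = max(labels, key=labels.get)
    return (f"The student shows a {best} proficiency level "
            f"({correct}/{total} correct).")
```

A full GLMP composes many such perception mappings hierarchically, so that low-level measurements are aggregated into higher-level linguistic summaries via rule-based inference.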