941 results for Human Information Processing
Abstract:
We explore how openness in terms of external linkages generates learning effects, which enable firms to generate more innovation outputs from any given breadth of external linkages. Openness to external knowledge sources, whether through search activity or linkages to external partners in new product development, involves a process of interaction and information processing. Such activities are likely to be subject to a learning process, as firms learn which knowledge sources and collaborative linkages are most useful to their particular needs, and which partnerships are most effective in delivering innovation performance. Using panel data from Irish manufacturing plants, we find evidence of such learning effects: establishments with substantial experience of external collaborations in previous periods derive more innovation output from openness in the current period. © 2013 The Authors. Strategic Management Journal published by John Wiley & Sons Ltd.
Abstract:
This paper describes the application of a model, initially developed for determining the e-business requirements of a manufacturing organization, to assess the impact of management concerns on the functions generated. The model has been tested in 13 case studies in small, medium, and large organizations. This research shows that incorporating concerns when generating the requirements for e-business functions improves the results, because the concerns expose issues that are relevant to e-business decision making. Running the model both with and without concerns, and then presenting the reasons for major variances, can expose these issues and enable them to be studied in detail at the individual function/reason level. © IFIP International Federation for Information Processing 2013.
Abstract:
Processing information and forming opinions pose special challenges when attempting to effectively manage the new or complex tasks that typically arise in projects. Based on research in organizational and social psychology, we introduce mechanisms and strategies for collective information processing that are important for forming opinions and handling information in projects.
Abstract:
As one of the most popular deep learning models, the convolutional neural network (CNN) has achieved huge success in image information extraction. Traditionally, a CNN is trained by supervised learning with labeled data and used as a classifier by adding a classification layer at the end. Its capability to extract image features is largely limited by the difficulty of assembling a large training dataset. In this paper, we propose a new unsupervised learning CNN model, which uses a convolutional sparse auto-encoder (CSAE) algorithm to pre-train the CNN. Instead of using labeled natural images for CNN training, the CSAE algorithm can train the CNN with unlabeled artificial images, which enables easy expansion of the training data and unsupervised learning. The CSAE algorithm is especially designed for extracting complex features from specific objects such as Chinese characters. After the features of artificial images are extracted by the CSAE algorithm, the learned parameters are used to initialize the first CNN convolutional layer, and then the CNN model is fine-tuned on scene image patches with a linear classifier. The new CNN model is applied to Chinese scene text detection and is evaluated on a multilingual image dataset, which labels Chinese, English, and numeral texts separately. A gain of more than 10% in detection precision is observed over two comparison CNN models.
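The pre-train-then-transfer idea above can be summarized in a short sketch. The following is a minimal, hypothetical PyTorch version, assuming a tensor of unlabeled 32x32 artificial character images; the architecture, sparsity penalty, and hyperparameters are illustrative assumptions, not the paper's actual CSAE configuration.

import torch
import torch.nn as nn

class ConvSparseAutoEncoder(nn.Module):
    """One conv encoder/decoder pair trained to reconstruct its input."""
    def __init__(self, n_filters=64, kernel=9):
        super().__init__()
        self.encoder = nn.Conv2d(1, n_filters, kernel)
        self.decoder = nn.ConvTranspose2d(n_filters, 1, kernel)

    def forward(self, x):
        code = torch.relu(self.encoder(x))
        return self.decoder(code), code

csae = ConvSparseAutoEncoder()
opt = torch.optim.Adam(csae.parameters(), lr=1e-3)
unlabeled = torch.rand(256, 1, 32, 32)  # stand-in for artificial character images

for epoch in range(10):
    recon, code = csae(unlabeled)
    # reconstruction error plus an L1 sparsity penalty on the feature code
    loss = nn.functional.mse_loss(recon, unlabeled) + 1e-4 * code.abs().mean()
    opt.zero_grad(); loss.backward(); opt.step()

# Transfer the learned filters into the first convolutional layer of a CNN
# classifier, which would then be fine-tuned on labeled scene image patches.
first_conv = nn.Conv2d(1, 64, 9)
first_conv.weight.data.copy_(csae.encoder.weight.data)
first_conv.bias.data.copy_(csae.encoder.bias.data)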
Abstract:
Background: Recent morpho-functional evidence has pointed out that abnormalities in the thalamus could play a major role in the expression of the neurophysiological and clinical correlates of migraine. Whether this phenomenon is primary or secondary to the thalamus's functional disconnection from the brainstem remains to be determined. We used a Functional Source Separation algorithm on the EEG signal to extract the activity of the different neuronal pools recruited at different latencies along the somatosensory pathway in interictal migraine without aura (MO) patients. Methods: Twenty MO patients and 20 healthy volunteers (HV) underwent EEG recording. Four ad-hoc functional constraints, two sub-cortical (FS14 at the brainstem and FS16 at the thalamic level) and two cortical (FS20 radial and FS22 tangential parietal sources), were used to extract the activity of successive stages of somatosensory information processing in response to separate left and right median nerve electric stimulation. A band-pass digital filter (450-750 Hz) was applied offline to extract high-frequency oscillatory (HFO) activity from the broadband EEG signal. Results: For both stimulated sides, significantly reduced sub-cortical brainstem (FS14) and thalamic (FS16) HFO activations characterized MO patients compared with HV. No difference emerged between the two groups in the two cortical HFO activations. Conclusions: The present results are the first neurophysiological evidence supporting the hypothesis that a functional disconnection of the thalamus from the subcortical monoaminergic system may underlie the abnormal interictal cortical information processing in migraine. Further studies are needed to investigate the precise directional connectivity across the entire primary subcortical and cortical somatosensory pathway in interictal MO. Written informed consent to publication was obtained from the patient(s).
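The one concretely specified processing step, the 450-750 Hz band-pass used to isolate HFO activity, can be sketched in a few lines. The following assumes SciPy and a hypothetical 5 kHz sampling rate (the abstract does not state the recording rate); the filter order and zero-phase application are illustrative choices.

import numpy as np
from scipy import signal

fs_hz = 5000                       # assumed sampling rate; must exceed 1500 Hz
eeg = np.random.randn(10 * fs_hz)  # stand-in for one broadband EEG channel

# 4th-order Butterworth band-pass applied forward and backward (zero phase)
sos = signal.butter(4, [450, 750], btype='bandpass', fs=fs_hz, output='sos')
hfo = signal.sosfiltfilt(sos, eeg)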
Abstract:
Feature selection is important in the medical field for many reasons. However, selecting important variables is a difficult task in the presence of censoring, a distinctive feature of survival data analysis. This paper proposed an approach to deal with the censoring problem in endovascular aortic repair survival data through Bayesian networks, merged and embedded with a hybrid feature selection process that combines Cox's univariate analysis with machine learning approaches such as ensemble artificial neural networks to select the most relevant predictive variables. The proposed algorithm was compared with common survival variable selection approaches such as the least absolute shrinkage and selection operator (LASSO) and the Akaike information criterion (AIC). The results showed that it was capable of dealing with high censoring in the datasets. Moreover, ensemble classifiers increased the area under the ROC curves of the two datasets, collected separately from two centers located in the United Kingdom. Furthermore, ensembles constructed with center 1 data enhanced the concordance index of center 2 predictions compared to the model built with a single network. Although the final reduced model using the neural networks and their ensembles is larger than those of the other methods, it outperformed the others in both concordance index and sensitivity for center 2 prediction. This indicates the reduced model is more powerful for cross-center prediction.
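The hybrid screening idea, univariate Cox analysis feeding an ensemble of neural networks, can be sketched as follows. This assumes lifelines and scikit-learn, synthetic data, and a classification stand-in for the survival endpoint; the paper's Bayesian-network treatment of censoring is not reproduced.

import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from sklearn.ensemble import BaggingClassifier
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
df = pd.DataFrame(rng.normal(size=(200, 5)), columns=[f'x{i}' for i in range(5)])
df['time'] = rng.exponential(5, 200)    # follow-up time
df['event'] = rng.integers(0, 2, 200)   # 1 = event observed, 0 = censored

# Step 1: univariate Cox screening; rank predictors by p-value
pvals = {}
for col in [c for c in df.columns if c.startswith('x')]:
    cph = CoxPHFitter().fit(df[[col, 'time', 'event']],
                            duration_col='time', event_col='event')
    pvals[col] = cph.summary.loc[col, 'p']
kept = sorted(pvals, key=pvals.get)[:2]  # keep the two strongest predictors

# Step 2: an ensemble (bagged) artificial neural network on the kept variables
ensemble = BaggingClassifier(MLPClassifier(hidden_layer_sizes=(8,), max_iter=500),
                             n_estimators=10).fit(df[kept], df['event'])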
Abstract:
Supply chains comprise complex processes spanning multiple trading partners. The various operations involved generate a large number of events that need to be integrated in order to enable internal and external traceability. Further, the provenance of artifacts and agents involved in supply chain operations is now a key traceability requirement. In this paper we propose a Semantic Web/Linked Data powered framework for the event-based representation and analysis of supply chain activities governed by the EPCIS specification. We specifically show how a new EPCIS event type called "Transformation Event" can be semantically annotated using EEM, the EPCIS Event Model, to generate linked data that can be exploited for internal event-based traceability in supply chains involving transformation of products. For integrating provenance with traceability, we propose a mapping from EEM to PROV-O. We exemplify our approach on an abstraction of the production processes that are part of the wine supply chain.
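A flavor of the proposed annotation and EEM-to-PROV-O mapping is sketched below with rdflib. The EEM namespace URI and term names are assumptions for illustration, not the published vocabulary; the PROV-O namespace is the W3C one.

from rdflib import Graph, Namespace, RDF

EEM = Namespace("http://purl.org/eem#")        # assumed EEM namespace
PROV = Namespace("http://www.w3.org/ns/prov#")
EX = Namespace("http://example.org/wine/")     # hypothetical wine supply chain data

g = Graph()
g.bind("eem", EEM); g.bind("prov", PROV); g.bind("ex", EX)

# The Transformation Event: grapes are consumed, wine is produced
g.add((EX.event1, RDF.type, EEM.TransformationEvent))
g.add((EX.event1, EEM.hasInputEPC, EX.grapeBatch42))
g.add((EX.event1, EEM.hasOutputEPC, EX.wineLot7))

# EEM-to-PROV-O mapping: the event is a prov:Activity that used the input
# artifact and generated the output artifact, giving provenance-aware traceability
g.add((EX.event1, RDF.type, PROV.Activity))
g.add((EX.event1, PROV.used, EX.grapeBatch42))
g.add((EX.wineLot7, PROV.wasGeneratedBy, EX.event1))

print(g.serialize(format="turtle"))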
Abstract:
The current study was designed to build on and extend the existing knowledge base of factors that cause, maintain, and influence child molestation. Theorized links among the type of offender and the offender's levels of moral development and social competence in the perpetration of child molestation were investigated. The conceptual framework for the study is based on the cognitive developmental stages of moral development as proposed by Kohlberg, the unified theory, or Four-Preconditions Model, of child molestation as proposed by Finkelhor, and the Information-Processing Model of Social Skills as proposed by McFall. The study sample consisted of 127 adult male child molesters participating in outpatient group therapy. All subjects completed a Self-Report Questionnaire which included questions designed to obtain relevant demographic data, questions similar to those used by the researchers for the Massachusetts Treatment Center: Child Molester Typology 3's social competency dimension, the Defining Issues Test (DIT) short form, the Social Avoidance and Distress Scale (SADS), the Rathus Assertiveness Schedule (RAS), and the Questionnaire Measure of Empathic Tendency (Empathy Scale). Data were analyzed utilizing confirmatory factor analysis, t-tests, and chi-square statistics. Partial support was found for the hypothesis that moral development is a separate but correlated construct from social competence. As predicted, although the actual mean score differences were small, a statistically significant difference was found between the mean DIT P-scores of the subject sample and those of the general male population, suggesting that child molesters, as a group, function at a lower level of moral development than the general male population. In addition, the situational offenders in the study sample demonstrated a statistically significantly higher level of moral development than the preferential offenders. The data did not support the hypothesis that situational offenders would demonstrate lower levels of social competence than preferential offenders. Relatively little significance is placed on this finding, however, because the measure for the social competency variable was likely subject to considerable measurement error, in that the items used as indicators were not clearly defined. The last hypothesis, which involved potential differences in social anxiety, assertion skills, and empathy between the situational and preferential offender types, was not supported by the data.
Abstract:
Query processing is a commonly performed procedure and a vital, integral part of information processing. It is therefore important for information processing applications to continuously improve the accessibility of data sources as well as the ability to perform queries on those data sources. It is well known that the relational database model and the Structured Query Language (SQL) are currently the most popular tools for implementing and querying databases. However, a certain level of expertise is needed to use SQL and to access relational databases. This study presents a semantic modeling approach that enables the average user to access and query existing relational databases without concern for the database's structure or technicalities. The method includes an algorithm to represent relational database schemas in a more semantically rich way, the result of which is a semantic view of the relational database. The user performs queries using an adapted version of SQL, namely Semantic SQL. This method substantially reduces the size and complexity of queries. Additionally, it shortens the database application development cycle and improves maintenance and reliability by reducing the size of application programs. Furthermore, a Semantic Wrapper tool illustrating the semantic wrapping method is presented. I further extend the use of this semantic wrapping method to heterogeneous database management. Relational and object-oriented databases and Internet data sources are considered to be part of the heterogeneous database environment. Semantic schemas resulting from the algorithm presented in the method were employed to describe the structure of these data sources in a uniform way, and Semantic SQL was utilized to query the various data sources. As a result, this method provides users with the ability to access and perform queries on heterogeneous database systems in a more natural way.
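The semantic wrapping idea can be illustrated with a toy translator: a semantic view maps conceptual attribute names onto the underlying relational schema, and a simplified query is rewritten into plain SQL. The mapping format and query interface below are illustrative assumptions, not the dissertation's actual Semantic SQL grammar.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp_t (emp_nm TEXT, dpt_cd TEXT, sal REAL)")
conn.execute("INSERT INTO emp_t VALUES ('Ana', 'D1', 50000)")

# Semantic view: conceptual attribute -> (table, column) in the relational schema
semantic_view = {
    "employee.name":   ("emp_t", "emp_nm"),
    "employee.salary": ("emp_t", "sal"),
}

def semantic_select(attrs, where=None):
    """Rewrite a query over conceptual attributes into SQL and run it."""
    cols = [semantic_view[a] for a in attrs]
    table = cols[0][0]  # single-table queries only, for this sketch
    sql = f"SELECT {', '.join(c for _, c in cols)} FROM {table}"
    if where is None:
        return conn.execute(sql).fetchall()
    attr, op, val = where
    sql += f" WHERE {semantic_view[attr][1]} {op} ?"
    return conn.execute(sql, (val,)).fetchall()

# The user queries conceptual names, never the cryptic physical schema
print(semantic_select(["employee.name"], ("employee.salary", ">", 40000)))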
Abstract:
A methodology for formally modeling and analyzing the software architecture of mobile agent systems provides a solid basis for developing high-quality mobile agent systems, and the methodology is helpful for studying other distributed and concurrent systems as well. However, providing such a methodology is a challenge because of agent mobility in mobile agent systems. The methodology was defined from two essential parts of software architecture: a formalism to define the architectural models and an analysis method to formally verify system properties. The formalism is two-layer Predicate/Transition (PrT) nets extended with dynamic channels, and the analysis method is a hierarchical approach to verify models at different levels. The two-layer modeling formalism smoothly transforms physical models of mobile agent systems into their architectural models. Dynamic channels facilitate the synchronous communication between nets, and they naturally capture the dynamic architecture configuration and agent mobility of mobile agent systems. Component properties are verified based on transformed individual components, system properties are checked in a simplified system model, and interaction properties are analyzed on models composed from the involved nets. Based on the formalism and the analysis method, this researcher formally modeled and analyzed a software architecture of mobile agent systems, and designed an architectural model of a medical information processing system based on mobile agents. The model checking tool SPIN was used to verify system properties such as reachability, concurrency, and safety of the medical information processing system. From successfully modeling and analyzing the software architecture of mobile agent systems, the conclusion is that PrT nets extended with channels are a powerful tool for modeling mobile agent systems, and the hierarchical analysis method provides a rigorous foundation for the modeling tool. The hierarchical analysis method not only reduces the complexity of the analysis, but also expands the application scope of model checking techniques. The results of formally modeling and analyzing the software architecture of the medical information processing system show that model checking is an effective and efficient way to verify software architecture. Moreover, this system demonstrates the high flexibility, efficiency, and low cost of mobile agent technologies.
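The kind of reachability check delegated to the model checker can be illustrated on a toy place/transition net modeling an agent's migration; the dissertation's two-layer PrT nets with dynamic channels are far richer than this sketch, and the net below is a made-up example.

from collections import deque

# Each transition consumes and produces tokens: name -> (pre-set, post-set)
transitions = {
    "dispatch": ({"agent_home": 1},   {"in_transit": 1}),
    "arrive":   ({"in_transit": 1},   {"agent_remote": 1}),
    "process":  ({"agent_remote": 1}, {"done": 1}),
}

def enabled(marking, pre):
    return all(marking.get(p, 0) >= n for p, n in pre.items())

def fire(marking, pre, post):
    m = dict(marking)
    for p, n in pre.items():
        m[p] -= n
    for p, n in post.items():
        m[p] = m.get(p, 0) + n
    return m

def reachable(initial):
    """Breadth-first exploration of the marking graph."""
    seen, queue = set(), deque([initial])
    while queue:
        m = queue.popleft()
        key = frozenset(m.items())
        if key in seen:
            continue
        seen.add(key)
        for pre, post in transitions.values():
            if enabled(m, pre):
                queue.append(fire(m, pre, post))
    return seen

# Reachability property: can the agent's task reach the 'done' state?
states = reachable({"agent_home": 1})
print(any(dict(s).get("done", 0) > 0 for s in states))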
Abstract:
The purpose of this phenomenological study was to describe how Colombian adult English language learners (ELLs) select and use language learning strategies (LLS). This study used Oxford's (1990a) taxonomy of LLS as its theoretical framework. Semi-structured interviews and a focus group interview were conducted, transcribed, and analyzed for 12 Colombian adult ELLs. A communicative activity known as strip story (Gibson, 1975) was used to elicit participants' use of LLS; this activity preceded the focus group session. Additionally, participants' reflective journals were collected and analyzed. Data were analyzed using inductive, deductive, and comparative analyses. Four themes emerged from the inductive analysis of the data: (a) learning conditions, (b) problem-solving resources, (c) information processing, and (d) target language practice. Oxford's classification of LLS was used as a guide in deductively analyzing data concerning the participants' experiences. The deductive analysis revealed that participants do not use certain strategies included in the third level of Oxford's taxonomy; for example, semantic mapping and physical response or sensation were not reported by participants. The findings from the inductive and deductive analyses were then compared to look for patterns and answers to the research questions. The comparative analysis revealed that participants used additional LLS that are not included in Oxford's taxonomy, for example, using sound transcription in the native language and seeking help from children. The study was conducted at the MDC InterAmerican campus in South Florida, one of the largest Hispanic-influenced communities in the U.S. Based on the findings from this study, the researcher proposed a framework for studying LLS that includes both external (i.e., learning context, community) and internal (i.e., culture, prior education) factors that influence the selection and use of LLS. The findings imply that, given the importance of both external and internal factors in learners' use of LLS, these factors should be considered for inclusion in any study of language learning strategy use by adult learners. Implications for teaching and learning as well as recommendations for further research are provided.
Abstract:
While most studies take a dyadic view when examining the environmental difference between the home country of a multinational enterprise (MNE) and a particular foreign country, they ignore that an MNE manages a network of subsidiaries embedded in diverse environments. Additionally, neither the impacts of global environments on top executives nor the effects of top executives' capabilities to handle institutional complexity are fully explored. Thus, using a three-essay format, this dissertation tried to fill these gaps by addressing the effects of institutional complexity and top management characteristics on top executive compensation and firm performance. Essay 1 investigated the impact of an MNE's institutional complexity, or the diversity of national institutions facing an MNE's network of subsidiaries, on top management team (TMT) compensation. This essay proposed that greater political and cultural complexity leads not only to greater TMT total compensation but also to a greater portion of TMT compensation linked with long-term performance. The arguments are supported by an unbalanced panel dataset of 296 U.S. firms with 1,340 observations. Essay 2 explored TMT social capital and its moderating role in value creation and appropriation by the chief executive officer (CEO). Using a sample of 548 U.S. firms and 2,010 observations, it found that greater TMT social capital does facilitate the effects of CEO intellectual capital and social capital on firm growth. Finally, Essay 3 examined the performance implications of the fit between managerial information-processing capabilities and institutional complexity. It proposed that institutional complexity is associated with information-processing needs, while smaller TMT turnover and larger TMT size reflect larger managerial information-processing capabilities. Consequently, superior performance is achieved by the match among institutional complexity, TMT turnover, and TMT size. All hypotheses in Essay 3 are supported in a sample of 301 U.S. firms and 1,404 observations. To conclude, this dissertation advances and extends our knowledge of the roles of institutional environments and top executives in firm performance and top executive compensation.
Abstract:
Ensemble stream modeling and data-cleaning are sensor information processing systems that have different training and testing methods by which their goals are cross-validated. This research examines a mechanism which seeks to extract novel patterns by generating ensembles from data. The main goal of label-less stream processing is to process the sensed events so as to eliminate noises that are uncorrelated, and to choose the most likely model without overfitting, thus obtaining higher model confidence. Higher quality streams can be realized by combining many short streams into an ensemble which has the desired quality. The framework for the investigation is an existing data mining tool. First, to accommodate feature extraction for a bush- or forest-fire event, we take the burnt area (BA*), sensed ground truth obtained from logs, as our target variable. Even though this is an obvious model choice, the results are disappointing, for two reasons: first, the histogram of fire activity is highly skewed; second, the measured sensor parameters are highly correlated. Since using non-descriptive features does not yield good results, we resort to temporal features. By doing so we carefully eliminate the averaging effects; the resulting histogram is more satisfactory, and conceptual knowledge is learned from sensor streams. Second is the process of feature induction by cross-validating attributes with single or multi-target variables to minimize training error. We use the F-measure, which combines precision and recall, to determine the false alarm rate of fire events. The multi-target data-cleaning trees use the information purity of the target leaf nodes to learn higher-order features. A sensitive variance measure such as the f-test is performed during each node's split to select the best attribute. The ensemble stream model approach proved to improve when complicated features were used with a simpler tree classifier. The ensemble framework for data-cleaning and the enhancements to quantify quality of fitness (30% spatial, 10% temporal, and 90% mobility reduction) of the sensors led to the formation of streams for sensor-enabled applications, which further motivates the novelty of stream quality labeling and its importance in handling the vast amounts of real-time mobile streams generated today.
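As a small illustration of the scoring step, the sketch below computes precision, recall, the F-measure, and a false alarm rate on hypothetical binary fire-event labels; the multi-target data-cleaning trees themselves are not reproduced.

from sklearn.metrics import f1_score, precision_score, recall_score

truth     = [1, 0, 1, 1, 0, 0, 1, 0]  # sensed ground truth (fire / no fire)
predicted = [1, 0, 0, 1, 0, 1, 1, 0]  # ensemble model output

print("precision :", precision_score(truth, predicted))
print("recall    :", recall_score(truth, predicted))
print("F-measure :", f1_score(truth, predicted))

# False alarm rate: fraction of no-fire cases incorrectly flagged as fire
false_alarms = sum(1 for t, y in zip(truth, predicted) if t == 0 and y == 1)
print("false alarm rate:", false_alarms / truth.count(0))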
Abstract:
Bio-systems are inherently complex information processing systems. Furthermore, the physiological complexities of biological systems limit the formation of behavioral hypotheses and the ability to test them. More importantly, the identification and classification of mutations in patients are central topics in today's cancer research. Next generation sequencing (NGS) technologies can provide genome-wide coverage at single-nucleotide resolution and at reasonable speed and cost. The unprecedented molecular characterization provided by NGS offers the potential for an individualized approach to treatment. These advances in cancer genomics have enabled scientists to interrogate cancer-specific genomic variants and compare them with the normal variants in the same patient. Analysis of this data provides a catalog of somatic variants present in the tumor genome but not in the normal tissue DNA. In this dissertation, we present a new computational framework for the problem of predicting the number of mutations on a chromosome for a given patient, which is a fundamental problem in clinical and research fields. We begin this dissertation with the development of a framework system capable of utilizing published data from a longitudinal study of patients with acute myeloid leukemia (AML), whose DNA from both normal and malignant tissues was subjected to NGS analysis at various points in time. By processing the sequencing data at the time of cancer diagnosis using the components of our framework, we tested it by predicting the genomic regions to be mutated at the time of relapse and, later, by comparing our results with the actual regions that showed mutations (discovered at relapse time). We demonstrate that this coupling of the algorithm pipeline can drastically improve the predictive ability of the search for a reliable molecular signature. Arguably, the most important result of our research is its superior performance to other methods such as the Radial Basis Function Network, Sequential Minimal Optimization, and Gaussian Process methods. In the final part of this dissertation, we present a detailed significance, stability, and statistical analysis of our model, and a performance comparison of the results. This work lays a good foundation for future research on other types of cancer.
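The baseline comparison reported above can be sketched with scikit-learn stand-ins: SVR for Sequential Minimal Optimization, kernel ridge with an RBF kernel for a Radial Basis Function network, and GaussianProcessRegressor for the Gaussian Process. The per-chromosome features and mutation counts below are synthetic; the dissertation's actual AML pipeline is not used.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 6))                                  # per-chromosome features
y = X @ rng.normal(size=6) + rng.normal(scale=0.5, size=120)   # mutation counts (synthetic)

for name, model in [("SVR (SMO-style)", SVR()),
                    ("RBF kernel ridge", KernelRidge(kernel="rbf")),
                    ("Gaussian process", GaussianProcessRegressor())]:
    score = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(f"{name}: mean CV R^2 = {score:.2f}")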
Abstract:
The purpose of this study was to determine the effects of participating in an existing study skills course, developed for use with a general college population, on the study strategies and attitudes of college students with learning disabilities. This study further investigated whether there would be differential effectiveness for segregated and mainstreamed sections of the course. The sample consisted of 42 students with learning disabilities attending a southeastern university. Students were randomly assigned to either a segregated or mainstreamed section of the study skills course. In addition, a control group consisted of students with learning disabilities who received no study skills instruction. All subjects completed the Learning and Study Strategies Inventory (LASSI) before and after the study skills course. Subjects in the segregated group showed significant improvement on six of the 10 scales of the LASSI: Time Management, Concentration, Information Processing, Selecting Main Ideas, Study Aids, and Self Testing. Subjects in the mainstreamed section showed significant improvement on five scales: Anxiety, Selecting Main Ideas, Study Aids, Self Testing, and Test Strategies. Subjects in the control group did not significantly improve on any of the scales. This study showed that college students with learning disabilities improved their study strategies and attitudes by participating in a study skills course designed for a general student population. Further, these students benefited whether they took the course only with other students with learning disabilities or in a mixed group of students with and without learning disabilities. These results have important practical implications: it appears that colleges can use existing study skills courses without having to develop special courses and course offerings targeted specifically at students with learning disabilities.