894 results for Context and activity Recognition
Abstract:
This thesis explores efforts to conjoin organisational contexts and capabilities in explaining sustainable competitive advantage. Oliver (1997) argued that organisations need to balance the need to conform to industry requirements to attain legitimization (e.g. DiMaggio & Powell, 1983) against the need for resource optimization (e.g. Barney, 1991). The author hypothesized that this balance can be viewed as movement along the homogeneity-heterogeneity continuum. An organisation in a homogeneous industry possesses characteristics similar to those of its competitors, whereas organisations in a heterogeneous industry are differentiated and competitively positioned (Oliver, 1997). The movement is influenced by the dynamic environmental conditions that an organisation is experiencing. The author extended Oliver's (1997) proposition of combining the RBV's focus on capabilities with institutional theory's focus on organisational context, and redefined organisational receptivity towards change (ORC) factors from Butler and Allen's (2008) findings. The thesis thereby contributes to the theoretical development of ORC theory as an explanation of the attainment of sustainable competitive advantage. ORC adopts assumptions from both institutional theory and the RBV, in that the receptivity factors include both organisational contexts and capabilities. The thesis employed a mixed-methods approach in which sequential qualitative and quantitative studies were deployed to establish a robust, reliable, and valid ORC scale. Hinkin's (1995) three-phase scale development process was adopted and updated: items generated from interviews and literature reviews went through successive rounds of exploratory factor analysis (EFA) and confirmatory factor analysis (CFA) to achieve convergent, discriminant, and nomological validity. Samples in the first phase (semi-structured interviews) were hotel owners and managers. In the second phase, samples were MBA students and employees of the private and public sectors.
In the third phase, samples were hotel managers. The final ORC scale is a parsimonious second-order latent construct. The first-order constructs comprise four latent receptivity factors: ideological vision (4 items), leading change (4 items), implementation capacity (4 items), and change orientation (7 items). Hypothesis testing revealed that high levels of perceived environmental uncertainty lead to high levels of the receptivity factors. Furthermore, the study found strong positive correlations between receptivity factors and competitive advantage, and between receptivity factors and organisational performance. Mediation analyses revealed that receptivity factors partially mediate the relationships between perceived environmental uncertainty and both competitive advantage and organisational performance.
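The partial-mediation result above rests on a standard path decomposition: the total effect of perceived environmental uncertainty (X) on an outcome (Y) splits into a direct effect plus an indirect effect through the receptivity factors (M). A minimal sketch of that decomposition, assuming simple ordinary-least-squares paths (the thesis's own analysis is more elaborate; the variable names and data shapes here are illustrative only):

```python
def cov(u, v):
    """Mean-centred covariance of two equal-length sequences."""
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    return sum((a - mu) * (b - mv) for a, b in zip(u, v)) / n

def mediation_paths(x, m, y):
    """Baron-Kenny style paths: total effect c (X -> Y), a (X -> M),
    b (M -> Y controlling for X), direct effect c' (X -> Y controlling
    for M). For OLS these satisfy the exact identity c = c' + a*b."""
    c = cov(x, y) / cov(x, x)
    a = cov(x, m) / cov(x, x)
    d = cov(x, x) * cov(m, m) - cov(x, m) ** 2   # must be non-zero
    b = (cov(m, y) * cov(x, x) - cov(x, y) * cov(x, m)) / d
    c_prime = (cov(x, y) * cov(m, m) - cov(m, y) * cov(x, m)) / d
    return c, a, b, c_prime
```

Partial mediation then corresponds to a non-zero indirect effect a*b alongside a still non-zero direct effect c'.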
Abstract:
DUE TO COPYRIGHT RESTRICTIONS ONLY AVAILABLE FOR CONSULTATION AT ASTON UNIVERSITY LIBRARY AND INFORMATION SERVICES WITH PRIOR ARRANGEMENT
Abstract:
The research presented in this paper is part of an ongoing investigation into how best to incorporate speech-based input within mobile data collection applications. In our previous work [1], we evaluated the ability of a single speech recognition engine to support accurate, mobile, speech-based data input. Here, we build on our previous research to compare the achievable speaker-independent accuracy rates of a variety of speech recognition engines; we also consider the relative effectiveness of different speech recognition engine and microphone pairings in terms of their ability to support accurate text entry under realistic mobile conditions of use. Our intent is to provide some initial empirical data derived from mobile, user-based evaluations to support technological decisions faced by developers of mobile applications that would benefit from, or require, speech-based data entry facilities.
Abstract:
We summarize the various strands of research on peripheral vision and relate them to theories of form perception. After a historical overview, we describe quantifications of the cortical magnification hypothesis, including an extension of Schwartz's cortical mapping function. The merits of this concept are considered across a wide range of psychophysical tasks, followed by a discussion of its limitations and the need for non-spatial scaling. We also review the eccentricity dependence of other low-level functions including reaction time, temporal resolution, and spatial summation, as well as perimetric methods. A central topic is then the recognition of characters in peripheral vision, both at low and high levels of contrast, and the impact of surrounding contours known as crowding. We demonstrate how Bouma's law, specifying the critical distance for the onset of crowding, can be stated in terms of the retinocortical mapping. The recognition of more complex stimuli, like textures, faces, and scenes, reveals a substantial impact of mid-level vision and cognitive factors. We further consider eccentricity-dependent limitations of learning, both at the level of perceptual learning and pattern category learning. Generic limitations of extrafoveal vision are observed for the latter in categorization tasks involving multiple stimulus classes. Finally, models of peripheral form vision are discussed. We report that peripheral vision is limited with regard to pattern categorization by a distinctly lower representational complexity and processing speed. Taken together, the limitations of cognitive processing in peripheral vision appear to be as significant as those imposed on low-level functions and by way of crowding.
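Bouma's law, mentioned above, can be stated compactly: the critical centre-to-centre spacing below which flankers begin to crowd a target grows roughly in proportion to eccentricity, with a proportionality constant of about 0.5. A one-line sketch (the exact constant and any additive offset vary across studies; the parameter names are illustrative):

```python
def bouma_critical_spacing(eccentricity_deg, b=0.5, offset_deg=0.0):
    """Critical target-flanker spacing (degrees of visual angle) below
    which crowding sets in, per Bouma's rule: s_c ~= b * eccentricity."""
    return b * eccentricity_deg + offset_deg
```

At 10 deg eccentricity this predicts crowding for flankers closer than about 5 deg of visual angle.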
Abstract:
The authors previously proposed a hierarchical method for defining class descriptions in pattern recognition problems. This paper proposes the development and use of such hierarchical descriptions for the parallel representation of complex patterns on multi-core computers or neural networks.
Abstract:
Studies of framing in the EU political system are still rare, and they suffer from a lack of systematic empirical analysis. Addressing this gap, we ask whether institutional and policy contexts, intertwined with the strategic side of framing, can explain the number and types of frames employed by different stakeholders. We use a computer-assisted manual content analysis and develop a fourfold typology of frames to study the frames that were prevalent in the debates on four EU policy proposals within financial market regulation and environmental policy at the EU level and in Germany, Sweden, the Netherlands and the United Kingdom. The main empirical finding is that both contexts and strategies exert a significant impact on the number and types of frames in EU policy debates. In conceptual terms, the article contributes to developing more fine-grained tools for studying frames and their underlying dimensions.
Abstract:
Redox regulation of signalling pathways is critical in proliferation and apoptosis; redox imbalance can lead to pathologies such as inflammation and cancer. Vaccinia H1-related protein (VHR; DUSP3) is a dual-specificity phosphatase important in controlling MAP kinase activity during the cell cycle. The active-site motif contains a cysteine that acts as a nucleophile during catalysis. We used VHR to investigate the effect of oxidation in vitro on phosphatase activity, with the aim of determining how the profile of site-specific modification relates to catalytic activity. Recombinant human VHR was expressed in E. coli and purified using a GST tag. The protein was subjected to oxidation with various concentrations of SIN-1 or tetranitromethane (TNM) as nitrating agents, or HOCl. The activity was assayed using either 3-O-methylfluorescein phosphate with fluorescence detection, or PIP3 by phosphate release with malachite green. The sites of oxidation were mapped using HPLC coupled to tandem mass spectrometry on an ABSciex 5600 TripleTOF following in-gel digestion. More than 25 different concentration-dependent oxidative modifications to the protein were detected, including oxidations of methionine, cysteine, histidine, lysine, proline and tyrosine, and the percentage of oxidized peptide (versus unmodified peptide) was determined from the extracted ion chromatograms. Unsurprisingly, methionine residues were very susceptible to oxidation, but there were significant differences in the extent of their oxidation. Similarly, tyrosine residues varied greatly in their modifications: Y85 and Y138 were readily nitrated, whereas Y38, Y78 and Y101 showed little modification. Y138 must be phosphorylated for MAPK phosphatase activity, so this susceptibility impacts on signalling pathways. Di- and tri-oxidations of cysteine residues were observed, but did not correlate directly with loss of activity.
Overall, the catalytic activity did not correlate with the redox state of any individual residue, but the total oxidative load correlated with treatment concentration and activity. This study provides the first comprehensive analysis of oxidative modifications of VHR, and demonstrates both heterogeneous oxidant effects and differential residue susceptibility in a signalling phosphatase.
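The "% oxidized peptide" figure quoted above is, in essence, a ratio of extracted-ion-chromatogram peak areas for the modified and unmodified forms of the same peptide. A minimal sketch of that calculation (the peak-area inputs are illustrative; real XIC quantification also requires matched retention-time windows and charge states):

```python
def percent_oxidized(area_modified, area_unmodified):
    """% oxidized peptide from extracted ion chromatogram (XIC) peak
    areas of the modified and unmodified forms of one peptide."""
    return 100.0 * area_modified / (area_modified + area_unmodified)
```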
Abstract:
In the visual perception literature, the recognition of faces has often been contrasted with that of non-face objects, in terms of differences with regard to the role of parts, part relations and holistic processing. However, recent evidence from developmental studies has begun to blur this sharp distinction. We review evidence for a protracted development of object recognition that is reminiscent of the well-documented slow maturation observed for faces. The prolonged development manifests itself in a retarded processing of metric part relations as opposed to that of individual parts and offers surprising parallels to developmental accounts of face recognition, even though the interpretation of the data is less clear with regard to holistic processing. We conclude that such results might indicate functional commonalities between the mechanisms underlying the recognition of faces and non-face objects, which are modulated by different task requirements in the two stimulus domains.
Abstract:
Biometrics is a field of study which pursues the association of a person's identity with his/her physiological or behavioral characteristics. As one aspect of biometrics, face recognition has attracted special attention because it is a natural and noninvasive means to identify individuals. Most previous studies in face recognition are based on two-dimensional (2D) intensity images. Face recognition based on 2D intensity images, however, is sensitive to changes in environmental illumination and subject orientation, affecting the recognition results. With the development of three-dimensional (3D) scanners, 3D face recognition is being explored as an alternative to the traditional 2D methods. This dissertation proposes a method in which the expression and the identity of a face are determined in an integrated fashion from 3D scans. In this framework, a front-end expression recognition module sorts the incoming 3D face according to the expression detected in the scan. Scans with neutral expressions are processed by a corresponding 3D neutral face recognition module; alternatively, if a scan displays a non-neutral expression, e.g., a smiling expression, it is routed to an appropriate specialized recognition module for smiling face recognition. The expression recognition method proposed in this dissertation is innovative in that it uses information from 3D scans to perform the classification task. A smiling face recognition module was developed, based on statistical modeling of the variance between faces with a neutral expression and faces with a smiling expression. The proposed expression and face recognition framework was tested with a database containing 120 3D scans from 30 subjects (half neutral faces and half smiling faces). It is shown that the proposed framework achieves a recognition rate 10% higher than attempting the identification with only the neutral face recognition module.
Abstract:
The move from Standard Definition (SD) to High Definition (HD) represents a six-fold increase in the data that needs to be processed. With expanding resolutions and evolving compression, there is a need for high performance with flexible architectures that allow quick upgradability. Technology continues to advance in image display resolution, compression techniques, and video intelligence. Software implementations of these systems can attain accuracy, with trade-offs among processing performance (to achieve specified frame rates while working on large image data sets), power, and cost constraints. There is a need for new architectures that keep pace with the fast innovations in video and imaging. This dissertation therefore contains dedicated hardware implementations of the pixel- and frame-rate processes on a Field Programmable Gate Array (FPGA) to achieve real-time performance. The following outlines the contributions of the dissertation. (1) We develop a target detection system by applying a novel running average mean threshold (RAMT) approach to globalize the threshold required for background subtraction. This approach adapts the threshold automatically to different environments (indoor and outdoor) and different targets (humans and vehicles). For low power consumption and better performance, we design the complete system on an FPGA. (2) We introduce a safe distance factor and develop an algorithm for detecting the occurrence of occlusion during target tracking. A novel mean threshold is calculated by motion-position analysis. (3) A new strategy for gesture recognition is developed using Combinational Neural Networks (CNN) based on a tree structure. The method is analysed on American Sign Language (ASL) gestures. We introduce a novel points-of-interest approach to reduce the feature vector size and a gradient threshold approach for accurate classification.
(4) We design a gesture recognition system using a hardware/software co-simulation neural network for the high speed and low memory storage requirements provided by the FPGA. We develop an innovative maximum-distance algorithm which uses only 0.39% of the image as the feature vector to train and test the system design. The gestures involved in different applications may vary; therefore, it is highly essential to keep the feature vector as small as possible while maintaining the same accuracy and performance.
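The abstract does not reproduce the RAMT formulation itself, but the idea of a globalized, automatically adapting threshold for background subtraction can be sketched as follows. This is a guess at its spirit, not the dissertation's actual algorithm: an exponential running-average background plus one global threshold set from the mean absolute frame difference; `alpha` and `k` are illustrative parameters, and frames are flattened to 1-D lists for brevity:

```python
def update_background(bg, frame, alpha=0.05):
    """Exponential running-average background model."""
    return [(1 - alpha) * b + alpha * f for b, f in zip(bg, frame)]

def ramt_mask(bg, frame, k=2.0):
    """Foreground mask using a single global threshold derived from
    the mean absolute difference against the running background."""
    diffs = [abs(f - b) for f, b in zip(frame, bg)]
    thr = k * sum(diffs) / len(diffs)          # global, self-adapting
    return [1 if d > thr else 0 for d in diffs]
```

Because the threshold is recomputed from each frame's own difference statistics, it adapts to indoor/outdoor lighting without per-scene tuning, which is the property the abstract highlights.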
Abstract:
Hydrophobicity as measured by Log P is an important molecular property related to toxicity and carcinogenicity. With increasing public health concern over the effects of Disinfection By-Products (DBPs), there are considerable benefits in developing Quantitative Structure-Activity Relationship (QSAR) models capable of accurately predicting Log P. In this research, Log P values of 173 DBP compounds in 6 functional classes were used to develop QSAR models by Multiple Linear Regression (MLR) analysis, applying 3 molecular descriptors: Energy of the Lowest Unoccupied Molecular Orbital (ELUMO), Number of Chlorine atoms (NCl) and Number of Carbon atoms (NC). The QSAR models developed were validated based on the Organization for Economic Co-operation and Development (OECD) principles. The model Applicability Domain (AD) and mechanistic interpretation were explored. Considering the very complex nature of DBPs, the established QSAR models performed very well with respect to goodness-of-fit, robustness and predictability. The predicted values of Log P of DBPs by the QSAR models were found to be significant, with correlation coefficients (R2) from 81% to 98%. The Leverage Approach by Williams Plot was applied to detect and remove outliers, consequently increasing R2 by approximately 2% to 13% for the different DBP classes. The developed QSAR models were statistically validated for their predictive power by the Leave-One-Out (LOO) and Leave-Many-Out (LMO) cross-validation methods. Finally, Monte Carlo simulation was used to assess the variations and inherent uncertainties in the QSAR models of Log P and determine the most influential parameters in connection with Log P prediction.
The developed QSAR models in this dissertation will have a broad applicability domain because the research data set covered six out of eight common DBP classes, including halogenated alkane, halogenated alkene, halogenated aromatic, halogenated aldehyde, halogenated ketone, and halogenated carboxylic acid, which have been brought to the attention of regulatory agencies in recent years. Furthermore, the QSAR models are suitable to be used for prediction of similar DBP compounds within the same applicability domain. The selection and integration of various methodologies developed in this research may also benefit future research in similar fields.
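The Log P model described above is a three-descriptor multiple linear regression, Log P ≈ b0 + b1·ELUMO + b2·NCl + b3·NC. A minimal sketch of fitting such a model by ordinary least squares via the normal equations (the descriptor values in the test are made up for illustration; the thesis's actual data and coefficients are not reproduced here):

```python
def fit_mlr(X, y):
    """Ordinary least squares via the normal equations (X'X)b = X'y,
    solved by Gaussian elimination. An intercept column is prepended,
    so b = [b0, b_ELUMO, b_NCl, b_NC] for three-descriptor rows."""
    rows = [[1.0] + list(r) for r in X]
    p = len(rows[0])
    A = [[sum(r[i] * r[j] for r in rows) for j in range(p)] for i in range(p)]
    v = [sum(r[i] * t for r, t in zip(rows, y)) for i in range(p)]
    for i in range(p):                       # forward elimination
        piv = max(range(i, p), key=lambda r: abs(A[r][i]))
        A[i], A[piv] = A[piv], A[i]
        v[i], v[piv] = v[piv], v[i]
        for r in range(i + 1, p):
            f = A[r][i] / A[i][i]
            for c in range(i, p):
                A[r][c] -= f * A[i][c]
            v[r] -= f * v[i]
    b = [0.0] * p
    for i in range(p - 1, -1, -1):           # back substitution
        b[i] = (v[i] - sum(A[i][c] * b[c] for c in range(i + 1, p))) / A[i][i]
    return b

def predict_logp(b, elumo, ncl, nc):
    """Apply the fitted three-descriptor model to one compound."""
    return b[0] + b[1] * elumo + b[2] * ncl + b[3] * nc
```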
Abstract:
Quantitative Structure-Activity Relationship (QSAR) modelling has been applied extensively in predicting the toxicity of Disinfection By-Products (DBPs) in drinking water. Among many toxicological properties, the acute and chronic toxicities of DBPs have been widely used in health risk assessment of DBPs. These toxicities are correlated with molecular properties, which are in turn correlated with molecular descriptors. The primary goals of this thesis are: (1) to investigate the effects of molecular descriptors (e.g., chlorine number) on molecular properties such as the energy of the lowest unoccupied molecular orbital (ELUMO) via QSAR modelling and analysis; (2) to validate the models by using internal and external cross-validation techniques; (3) to quantify the model uncertainties through the Taylor method and Monte Carlo simulation. One very important way to predict molecular properties such as ELUMO is QSAR analysis. In this study, the number of chlorine atoms (NCl) and number of carbon atoms (NC), as well as the energy of the highest occupied molecular orbital (EHOMO), are used as molecular descriptors. There are typically three approaches used in QSAR model development: (1) Linear or Multiple Linear Regression (MLR); (2) Partial Least Squares (PLS); and (3) Principal Component Regression (PCR). In QSAR analysis, a very critical step is model validation, after QSAR models are established and before applying them to toxicity prediction. The DBPs to be studied include five chemical classes: chlorinated alkanes, alkenes, and aromatics. In addition, validated QSARs are developed to describe the toxicity of selected groups of DBP chemicals (i.e., chloro-alkanes and aromatic compounds with a nitro- or cyano group) to three types of organisms (e.g., fish, T. pyriformis, and P. phosphoreum) based on experimental toxicity data from the literature.
The results show that: (1) QSAR models to predict molecular properties built by MLR, PLS or PCR can be used either to select valid data points or to eliminate outliers; (2) the Leave-One-Out cross-validation procedure by itself is not enough to give a reliable representation of the predictive ability of the QSAR models; however, Leave-Many-Out/K-fold cross-validation and external validation can be applied together to achieve more reliable results; (3) ELUMO is shown to correlate highly with NCl for several classes of DBPs; and (4) according to uncertainty analysis using the Taylor method, the uncertainty of the QSAR models arises mostly from NCl for all DBP classes.
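The Leave-One-Out procedure discussed above refits the model n times, each time predicting the one held-out observation; predictive ability is then commonly scored as Q² = 1 − PRESS/SS_tot. A minimal sketch with a simple one-descriptor linear model as a stand-in estimator (the thesis applies this to its MLR/PLS/PCR QSAR models; function names are illustrative):

```python
def loo_q2(x, y, fit, predict):
    """Leave-one-out cross-validation: refit with each observation
    held out, accumulate PRESS, and return Q^2 = 1 - PRESS/SS_tot."""
    n = len(x)
    press = 0.0
    for i in range(n):
        xt, yt = x[:i] + x[i + 1:], y[:i] + y[i + 1:]
        model = fit(xt, yt)
        press += (y[i] - predict(model, x[i])) ** 2
    ybar = sum(y) / n
    ss_tot = sum((v - ybar) ** 2 for v in y)
    return 1 - press / ss_tot

def fit_line(x, y):
    """Simple-regression stand-in estimator: returns (slope, intercept)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((v - mx) ** 2 for v in x)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    slope = sxy / sxx
    return slope, my - slope * mx

def predict_line(model, xi):
    slope, intercept = model
    return slope * xi + intercept
```

As point (2) above notes, a high LOO Q² alone is not sufficient evidence of predictivity; the same scorer can be run with larger held-out blocks (Leave-Many-Out) or on an external test set.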
Abstract:
Present theories of deep-sea community organization recognize the importance of small-scale biological disturbances, originating partly from the activities of epibenthic megafaunal organisms, in maintaining high benthic biodiversity in the deep sea. However, due to technical difficulties, in situ experimental studies to test hypotheses in the deep sea are lacking. The objective of the present study was to evaluate the potential of cages as tools for studying the importance of epibenthic megafauna for deep-sea benthic communities. Using the deep-diving Remotely Operated Vehicle (ROV) "VICTOR 6000", six experimental cages were deployed on the sea floor at 2500 m water depth and sampled after 2 years (2y) and 4 years (4y) for a variety of sediment parameters in order to test for caging artefacts. Photo and video footage from both experiments showed that the cages were efficient at excluding the targeted fauna. The cages also proved appropriate for deep-sea studies in that there was no fouling and no evidence of any organism establishing residence on or adjacent to them. Environmental changes inside the cages depended on the experimental period analysed: in the 4y experiment, chlorophyll a concentrations were higher in the uppermost centimetre of sediment inside the cages, whereas in the 2y experiment they did not differ between inside and outside. Although the cages caused some changes to the sedimentary regime, these were relatively minor compared with similar studies in shallow water. The only parameter that was significantly higher under the cages in both experiments was the concentration of phaeopigments. Since the epibenthic megafauna at our study site can potentially affect phytodetritus distribution and availability at the seafloor (e.g. via consumption, disaggregation and burial), we suggest that their exclusion was, at least in part, responsible for the increases in pigment concentrations.
Cages might be suitable tools to study the long-term effects of disturbances caused by megafaunal organisms on the diversity and community structure of smaller-sized organisms in the deep sea, although further work employing partial cage controls, greater replication, and evaluation of the faunal components will be essential to unequivocally establish their utility.