953 results for test-process features


Abstract:

The subject of this thesis was the acquisition of difficult non-native vowels by speakers of two different languages. In order to study the subject, a group of Finnish speakers and another group of American English speakers were recruited and they underwent a short listen-and-repeat training that included as stimuli the semisynthetically created pseudowords /ty:ti/ and /tʉ:ti/. The aim was to study the effect of the training method on the subjects as well as the possible influence of the speakers’ native language on the process of acquisition. The selection of the target vowels /y/ and /ʉ/ was made according to the Speech Learning Model and Perceptual Assimilation Model, both of which predict that second language speech sounds that share similar features with sounds of a person’s native language are most difficult for the person to learn. The vowel /ʉ/ is similar to Finnish vowels as well as to vowels of English, whereas /y/ exists in Finnish but not in English, although it is similar to other English vowels. Therefore, it can be hypothesized that /ʉ/ is a difficult vowel for both groups to learn and /y/ is difficult for English speakers. The effect of training was tested with a pretest-training-posttest protocol in which the stimuli were played alternately and the subjects’ task was to repeat the heard stimuli. The training method was thought to improve the production of non-native sounds by engaging different feedback mechanisms, such as auditory and somatosensory. These, according to Template Theory, modify the production of speech by altering the motor commands from the internal speech system or the feedforward signal which translates the motoric commands into articulatory movements. The subjects’ productions during the test phases were recorded and an acoustic analysis was performed in which the formant values of the target vowels were extracted. Statistical analyses showed a statistically significant difference between groups in the first formant, signaling a possible effect of native motor commands. Furthermore, a statistically significant difference between groups was observed in the standard deviation of the formants in the production of /y/, showing the uniformity of native production. The training had no observable effect, possibly due to the short nature of the training protocol.
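
The acoustic-analysis step described above can be sketched roughly as follows: extract the first formant of each recorded vowel token with Praat (via the parselmouth bindings) and compare the groups with an independent-samples t-test. The file name in the comment, the midpoint measurement time, and the illustrative F1 values are assumptions for illustration, not data from the thesis.

```python
import parselmouth
from scipy import stats

def vowel_f1(wav_path, midpoint_s):
    """Return F1 (Hz) at the vowel midpoint, using Praat's Burg formant tracker."""
    formant = parselmouth.Sound(wav_path).to_formant_burg()
    return formant.get_value_at_time(1, midpoint_s)

# In the study, each subject's recorded productions would be measured, e.g.:
#   f1 = vowel_f1("subject01_tyti_post.wav", 0.25)
# Illustrative F1 values (Hz) standing in for the two groups' measurements:
finnish_f1 = [310, 295, 320, 305, 298]
english_f1 = [340, 360, 335, 352, 348]

# Independent-samples t-test on F1, mirroring the between-group comparison.
t, p = stats.ttest_ind(finnish_f1, english_f1)
print(f"F1 group difference: t = {t:.2f}, p = {p:.4f}")
```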

Abstract:

Previous research has highlighted the importance of positive physical activity (PA) behaviours during childhood to promote sustained active lifestyles throughout the lifespan (Telama et al. 2005; 2014). It is in this context that the role of schools and teachers in facilitating PA education is promoted. Research suggests that teachers play an important role in shaping children's attitudes towards PA (Figley 1985) and that schools may be an efficient vehicle for PA provision and promotion (McGinnis, Kanner and DeGraw, 1991; Wechsler, Deveraux, Davis and Collins, 2000). Yet despite consensus that schools represent an ideal setting from which to 'reach' young people (Department of Health and Human Services, UK, 2012), there remains conceptual (e.g. multi-component intervention) and methodological (e.g. duration, intensity, family involvement) ambiguity regarding the mechanisms of change claimed by PA intervention programmes. This may, in part, contribute to research findings suggesting that PA interventions have had limited impact on children's overall activity levels and thereby limited impact on children's metabolic health (Metcalf, Henley & Wilkin, 2012). A marked criticism of the health promotion field has been its focus on behavioural change while failing to acknowledge the impact of context in influencing health outcomes (Golden & Earp, 2011). For years, the trans-theoretical model of behaviour change has been 'the dominant model for health behaviour change' (Armitage, 2009); this model focusses primarily on the individual and the psychology of the change process. Arguably, this model is limited by its reliance on the individual's decision-making ability and degree of self-efficacy to achieve sustained behavioural change, and it does not take account of external factors that may hinder the ability to realise change. Like the trans-theoretical model, socio-ecological models place the individual at the focal point of change, but they also emphasise the importance of connecting multiple impacting variables, in particular the connections between the social environment, the physical environment and public policy in facilitating behavioural change (REF). In this research, a social-ecological framework was used to connect the ways a PA intervention programme had an impact (or not) on participants, and to make explicit the foundational features of the programme that facilitated positive change. In this study, we examined the evaluation of a multi-agency approach to a PA intervention programme which aimed to increase physical activity, and awareness of the importance of physical activity, among key stage 2 (age 7-12) pupils in three UK primary schools. The agencies involved were the local health authority, a community-based charitable organisation, a local health administrative agency, and the city school district. In examining the impact of the intervention, we adopted a process evaluation model in order to better understand the mechanisms and context that facilitated change. Therefore, the aim of this evaluation was to describe the provision, process and impact of the intervention by 1) assessing changes in physical activity levels, 2) assessing changes in students' attitudes towards physical activity, 3) examining students' perceptions of the child-sized fitness equipment in school and their likelihood of using the equipment outside of school, and 4) exploring staff perceptions, specifically the challenges and benefits, of facilitating equipment-based exercise sessions in the school environment.
Methodology, Methods, Research Instruments or Sources Used
Evaluation of the intervention was designed as a matched-control study and was undertaken over a seven-month period. The school-based intervention involved three intervention schools (n = 436; 224 boys) and one control school (n = 123; 70 boys) in a low-socioeconomic, multicultural urban setting. The PA intervention was separated into two phases: a motivational DVD and 10 days of circuit-based exercise sessions (Phase 1), followed by a maintenance phase (Phase 2) that incorporated a PA reward programme and the use of specialist kids' gym equipment located at each school for a period of 4 weeks. Outcome measures were collected at baseline (January) and endpoint (July; end of the academic school year) using reliable and valid self-report measures. The children's attitudes towards PA were assessed using the Children's Attitudes towards Physical Activity (CATPA) questionnaire. The Physical Activity Questionnaire for Children (PAQ-C), a 7-day recall questionnaire, was used to assess PA levels over a school week. A standardised test battery (Fitnessgram®) was used to assess cardiovascular fitness, body composition, muscular strength and endurance, and flexibility. After the 4-week period, similar kids' equipment was available for general access at local community facilities. The control school did not receive any of the interventions. All physical fitness tests and PA questionnaires were administered and collected prior to the start of the intervention (January) and following the intervention period (July) by an independent evaluation team. Evaluation testing took place at the individual schools over 2-3 consecutive days (depending on the number of children to be tested at the school). Staff (n = 19) and student (n = 436) perceptions of the child-sized fitness equipment were assessed via questionnaires post-intervention. Students completed a questionnaire assessing enjoyment, usage, ease of use, and equipment access and usage in the community. A further questionnaire assessed staff perceptions of the delivery of the exercise sessions, classroom engagement and student perceptions.
Conclusions, Expected Outcomes or Findings
Findings showed that both the intervention (16.4%) and control groups increased their PAQ-C score by post-intervention (p < 0.05), with the intervention (17.8%) and control (21.3%) boys showing the greatest increases in physical activity levels. At post-intervention, there was a 5.5% decline in the intervention girls' attitudes towards PA in the aesthetic subdomain (p = 0.009), whereas the control boys showed an increase in positive attitudes in the health domain (p = 0.003). No significant differences in attitudes towards physical activity were observed in any other domain for either group at post-intervention (p > 0.05). In the equipment questionnaire, 96% of the children stated that they enjoyed using the equipment and would like to use it again in the future; however, at post-intervention only 27% reported having used the equipment outside of school in the previous 7 days. Students identified the ski walker (34%) and cycle (32%) as their favourite pieces of equipment, and single-joint machines such as the leg extension and biceps/triceps machine (<3%) as their least favourite. Key themes from staff were that the equipment sessions were enjoyable and a novel activity, that children felt very grown-up, and that the activity was linked to a real fitness experience. Staff also expressed the need for more support to deliver the sessions and for more time for each session. Findings from this study suggest that a more integrated approach across the various agencies is required, in particular more support to increase teachers' pedagogical content knowledge in age-appropriate physical activity instruction. Future recommendations for successful implementation include a sufficient time period for all students to access and engage with the equipment, increased access to and marketing of the facilities to parents within the local community, and professional teacher support strategies to facilitate the exercise sessions.
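
A minimal sketch of the kind of pre/post comparison reported above, assuming a simple data frame with one PAQ-C score per child at baseline and endpoint; the group layout, column names and values are invented for illustration and are not the study's data.

```python
import pandas as pd
from scipy import stats

# Hypothetical PAQ-C scores (1-5 scale) at baseline and post-intervention.
df = pd.DataFrame({
    "group":     ["intervention"] * 4 + ["control"] * 4,
    "paqc_pre":  [2.1, 2.4, 1.9, 2.7, 2.2, 2.5, 2.0, 2.3],
    "paqc_post": [2.5, 2.8, 2.2, 3.0, 2.6, 2.9, 2.4, 2.8],
})

for name, g in df.groupby("group"):
    t, p = stats.ttest_rel(g["paqc_pre"], g["paqc_post"])  # paired pre/post test
    change = 100 * (g["paqc_post"].mean() - g["paqc_pre"].mean()) / g["paqc_pre"].mean()
    print(f"{name}: {change:+.1f}% change in PAQ-C, t = {t:.2f}, p = {p:.3f}")
```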

Abstract:

Today's data loggers have many functions, and this is reflected in the software used to communicate with them. They have more functions than individual companies and private users need, which makes the software unnecessarily complicated. By reducing the number of configuration options, the software can be made smaller, faster and easier to learn. The work was carried out at Inventech Europe AB, which supplied data loggers for temperature and humidity measurement. The company wanted to investigate the possibility of developing a program that people with limited computer experience could quickly learn to use. The purpose of this work was therefore to investigate what such a program could look like. The focus of the work was on the design process. The different stages of the process were visualised with various UML diagrams. Since the project was relatively small, a development process following the waterfall model was chosen, in which the individual steps (specification, design, implementation, testing) are carried out in sequence. This presupposes that one step is finished before the next one begins. The model works best when the project is small and well defined. Unfortunately, the company's requirements for how the program should work changed several times during the course of the work, so a more flexible development process should have been chosen to leave room for changes arising during the project. The end result was a functional prototype that was easy to use and had no more configuration options than necessary. The prototype can be used as a base for adding custom, tailor-made functionality. To demonstrate this, two additional functions were included. One was the ability to save the collected data to an external database, which could then serve as a source for other programs that could, for example, visualise the data with different kinds of graphs. To make it easy to identify the connected data loggers, the ability to name the individual devices was also included.
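
A minimal sketch of the two extra functions mentioned above: storing collected temperature/humidity readings in an external database and giving each connected logger a user-chosen name. SQLite and the table layout are assumptions used for illustration; the actual prototype is not described at this level of detail.

```python
import sqlite3

conn = sqlite3.connect("logger_data.db")
conn.execute("""CREATE TABLE IF NOT EXISTS readings (
                    device_name TEXT, timestamp TEXT,
                    temperature REAL, humidity REAL)""")

device_names = {}                       # serial number -> user-chosen name

def name_device(serial, name):
    device_names[serial] = name         # e.g. "cold-room"

def store_reading(serial, timestamp, temperature, humidity):
    name = device_names.get(serial, serial)
    conn.execute("INSERT INTO readings VALUES (?, ?, ?, ?)",
                 (name, timestamp, temperature, humidity))
    conn.commit()

name_device("SN-0042", "cold-room")
store_reading("SN-0042", "2015-05-01T12:00:00", 4.2, 61.0)
```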

Abstract:

Introduction. The IGF system has recently been shown to play an important role in the regulation of breast tumor cell proliferation. Breast density, moreover, is currently considered the strongest breast cancer risk factor. It is not yet clear whether these factors are interrelated and whether, and how, they are influenced by menopausal status. The purpose of this study was to examine the possible effects of IGF-1, IGFBP-3 and the IGF-1/IGFBP-3 molar ratio on mammographic density, stratified by menopausal status. Patients and methods. A group of 341 Italian women were interviewed to collect the following data: family history of breast cancer, reproductive and menstrual factors, breast biopsies, previous use of hormonal contraceptives, hormone replacement therapy (HRT) in menopause, and lifestyle information. A blood sample was drawn for determination of IGF-1 and IGFBP-3 levels, and the IGF-1/IGFBP-3 molar ratio was then calculated. On the basis of recent mammograms the women were divided into two groups: dense breast (DB) and non-dense breast (NDB). Student's t-test was employed to assess the association between breast density and plasma levels of IGF-1, IGFBP-3 and the molar ratio. To assess whether this relationship was similar in subgroups of pre- and postmenopausal women, the study population was stratified by menopausal status and Student's t-test was performed again. Finally, multivariate analysis was employed to evaluate whether confounding factors might influence the relationship between growth factors and breast density. Results. The analysis of the relationship between mammographic density and plasma levels of IGF-1, IGFBP-3 and the IGF-1/IGFBP-3 molar ratio showed that IGF-1 levels and the molar ratio differed between the two groups, with higher mean values in the DB group (IGF-1: 109.6 versus 96.6 ng/ml, p = 0.001; molar ratio: 29.4 versus 25.5, p = 0.001), whereas IGFBP-3 showed similar values in both groups (DB and NDB). Analysis of plasma levels of IGF-1, IGFBP-3 and the IGF-1/IGFBP-3 molar ratio in relation to breast density after stratification of the study population by menopausal status (premenopausal and postmenopausal) showed no association between the plasma levels of the growth factors and breast density in either premenopausal or postmenopausal patients. Multivariate analysis showed that only nulliparity, premenopausal status and body mass index (BMI) were determinants of breast density. Conclusions. Our study provides strong evidence of a crude association between breast density and plasma levels of IGF-1 and the molar ratio. On the basis of our results, it is reasonable to assume that the role of IGF-1 and the molar ratio in the pathogenesis of breast cancer might be mediated through mammographic density. IGF-1 and the molar ratio might thus increase the risk of cancer by increasing mammographic density.
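
The comparison and adjustment described above can be sketched roughly as follows: a Student's t-test of IGF-1 between the DB and NDB groups, followed by a multivariable logistic model with the reported determinants (nulliparity, menopausal status, BMI) as covariates. The data are synthetic and the exact multivariate method used in the study is not specified, so this is only an illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

rng = np.random.default_rng(0)
n = 341                                          # same size as the study sample
df = pd.DataFrame({
    "dense":         rng.integers(0, 2, n),      # 1 = dense breast (DB)
    "igf1":          rng.normal(100, 20, n),     # ng/ml, illustrative values
    "bmi":           rng.normal(25, 4, n),
    "nulliparous":   rng.integers(0, 2, n),
    "premenopausal": rng.integers(0, 2, n),
})

db, ndb = df[df.dense == 1], df[df.dense == 0]
t, p = stats.ttest_ind(db.igf1, ndb.igf1)
print(f"IGF-1, DB vs NDB: t = {t:.2f}, p = {p:.3f}")

# Logistic regression of breast density on IGF-1 plus the candidate confounders.
model = smf.logit("dense ~ igf1 + bmi + nulliparous + premenopausal", data=df).fit()
print(model.summary())
```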

Abstract:

Purpose: Stereopsis is the perception of depth based on retinal disparity. Global stereopsis depends on the processing of random-dot stimuli, whereas local stereopsis depends on contour perception. The aim of this study was to correlate three stereopsis tests, TNO®, StereoTA B® and Fly Stereo Acuity Test®, and to study the sensitivity and correlation between them, using TNO® as the gold standard. Other variables, such as near point of convergence, vergences, symptoms and optical correction, were also correlated with the three tests. Materials and Methods: Forty-nine students from Escola Superior de Tecnologia da Saúde de Lisboa (ESTeSL), aged 18-26 years, were included. Results: The mean (standard deviation, SD) stereopsis values in each test were: TNO® = 87.04" ±84.09"; FlyTest® = 38.18" ±34.59"; StereoTA B® = 124.89" ±137.38". The coefficients of determination were R² = 0.6 between TNO® and StereoTA B® and R² = 0.2 between TNO® and FlyTest®. The Pearson correlation coefficient showed a positive correlation between TNO® and StereoTA B® (r = 0.784 at α = 0.01). The Phi coefficient showed a strong, positive association between TNO® and StereoTA B® (Φ = 0.848 at α = 0.01). In the ROC analysis, StereoTA B® had a larger area under the curve than FlyTest®, with a sensitivity of 92.3% at a specificity of 94.4%, indicating a sensitive test with good discriminative power. Conclusion: We conclude that stereopsis tests that assess global stereopsis are an asset for clinical use. This type of test is more sensitive, revealing changes in stereopsis when it is actually altered, unlike local stereopsis tests, which often indicate normal stereopsis and thus camouflage a change. We also noted that StereoTA B®, despite being a digital application, is very sensitive and showed good correlation with TNO®.
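
A brief sketch of the ROC comparison reported above, treating the TNO® classification as the gold standard and the stereoacuity threshold of the test under evaluation as the score; the data values below are invented for illustration, not the study's measurements.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

# Illustrative data: 1 = reduced stereopsis according to TNO®, and the
# stereoacuity threshold (arc-seconds) measured with the test under evaluation
# (a higher threshold means worse stereopsis).
tno_abnormal = np.array([0, 0, 0, 0, 0, 0, 1, 1, 1, 1])
test_arcsec  = np.array([30, 60, 40, 60, 120, 60, 240, 480, 120, 400])

print("AUC:", roc_auc_score(tno_abnormal, test_arcsec))

# Sensitivity/specificity at each candidate cut-off.
fpr, tpr, thresholds = roc_curve(tno_abnormal, test_arcsec)
for threshold, sens, fp in zip(thresholds, tpr, fpr):
    print(f'cut-off {threshold:6.1f}"  sensitivity {sens:.2f}  specificity {1 - fp:.2f}')
```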

Abstract:

Stereopsis is defined as the perception of depth based on retinal disparity. Global stereopsis depends on the processing of random-dot stimuli, and local stereopsis depends on the perception of contours. The aim of this study was to correlate three stereopsis tests, TNO®, StereoTAB® and Fly Stereo Acuity Test®, and to assess the sensitivity and correlation between them, with TNO® as the gold standard. Forty-nine students from Escola Superior de Tecnologia da Saúde de Lisboa (ESTeSL), aged 18-26 years, were included. The variables near point of convergence (NPC), vergences, symptoms and optical correction were correlated with the three tests. The mean (standard deviation) stereopsis values were: TNO® = 87.04" ±84.09"; FlyTest® = 38.18" ±34.59"; StereoTAB® = 124.89" ±137.38". Coefficients of determination: R² = 0.6 between TNO® and StereoTAB® and R² = 0.2 between TNO® and FlyTest®. The Pearson correlation coefficient showed a positive correlation between TNO® and StereoTAB® (r = 0.784 at α = 0.01). The Phi coefficient showed a strong positive association between TNO® and StereoTAB® (Φ = 0.848 at α = 0.01). In the ROC curve, StereoTAB® had a larger area under the curve than FlyTest®, with a sensitivity of 92.3% at a specificity of 94.4%, making it a sensitive test with good discriminative power.

Abstract:

Previous research found personality test scores to be inflated, on average, among individuals who were motivated to present themselves in a desirable fashion in high-stakes situations, such as during the employee selection process. One apparently effective way to reduce this undesirable test score inflation was to warn participants against faking. This research set out to investigate whether warning against faking would indeed affect personality test scores in the theoretically expected fashion. Contrary to expectations, the results did not support the hypothesized causal chain. Results across three studies show that while a warning may lower test scores in participants motivated to respond desirably (i.e., to fake), the effect of the warning on test scores was not fully mediated by a reduction in motivation to do well or by self-reports of exaggerated responding on the personality test. Theoretical and practical implications are discussed.
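
A hedged sketch of the mediation logic tested above: does the effect of a warning on test scores run through a drop in motivation? It uses a simple product-of-coefficients check with OLS on synthetic data; the variable names and effect sizes are invented, and the studies' actual mediation procedure may differ.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 300
warned = rng.integers(0, 2, n)
motivation = 5 - 0.8 * warned + rng.normal(0, 1, n)      # warning lowers motivation
score = 50 + 2.0 * motivation - 1.0 * warned + rng.normal(0, 3, n)
df = pd.DataFrame({"warned": warned, "motivation": motivation, "score": score})

a = smf.ols("motivation ~ warned", data=df).fit().params["warned"]          # X -> M
b = smf.ols("score ~ motivation + warned", data=df).fit().params["motivation"]  # M -> Y
total = smf.ols("score ~ warned", data=df).fit().params["warned"]           # X -> Y

indirect = a * b
print(f"total effect {total:.2f} = direct {total - indirect:.2f} "
      f"+ indirect via motivation {indirect:.2f}")
```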

Abstract:

Modern software application testing, such as the testing of software driven by graphical user interfaces (GUIs) or leveraging event-driven architectures in general, requires paying careful attention to context. Model-based testing (MBT) approaches first acquire a model of an application, then use the model to construct test cases covering relevant contexts. A major shortcoming of state-of-the-art automated model-based testing is that many test cases proposed by the model are not actually executable. These infeasible test cases threaten the integrity of the entire model-based suite, and any coverage of contexts the suite aims to provide. In this research, I develop and evaluate a novel approach for classifying the feasibility of test cases. I identify a set of pertinent features for the classifier, and develop novel methods for extracting these features from the outputs of MBT tools. I use a supervised logistic regression approach to obtain a model of test case feasibility from a randomly selected training suite of test cases. I evaluate this approach with a set of experiments. The outcomes of this investigation are as follows: I confirm that infeasibility is prevalent in MBT, even for test suites designed to cover a relatively small number of unique contexts. I confirm that the frequency of infeasibility varies widely across applications. I develop and train a binary classifier for feasibility with average overall error, false positive, and false negative rates under 5%. I find that unique event IDs are key features of the feasibility classifier, while model-specific event types are not. I construct three types of features from the event IDs associated with test cases, and evaluate the relative effectiveness of each within the classifier. To support this study, I also develop a number of tools and infrastructure components for scalable execution of automated jobs, which use state-of-the-art container and continuous integration technologies to enable parallel test execution and the persistence of all experimental artifacts.
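
A minimal sketch of the kind of feasibility classifier described above: bag-of-event-ID features for each generated test case and a logistic regression labelling it feasible or infeasible. The event IDs and labels below are invented stand-ins for MBT tool output, and the dissertation's actual feature extraction is more involved.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Each test case is a sequence of GUI event IDs; 1 = infeasible, 0 = feasible.
test_cases = [
    "e12 e07 e33 e41", "e12 e07 e09", "e55 e33 e41 e02",
    "e55 e07 e09",     "e12 e33 e02", "e55 e41 e09",
]
infeasible = [1, 0, 1, 0, 1, 0]

# Bag-of-event-ID features (the "unique event ID" features found to matter).
X = CountVectorizer(token_pattern=r"\S+").fit_transform(test_cases)
clf = LogisticRegression().fit(X, infeasible)
print("training accuracy:", clf.score(X, infeasible))
```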

Abstract:

Nonlinear thermo-mechanical properties of advanced polymers are crucial to accurate prediction of the process-induced warpage and residual stress of electronic packages. A Fiber Bragg grating (FBG) sensor based method is advanced and implemented to determine temperature- and time-dependent nonlinear properties. The FBG sensor is embedded in the center of a cylindrical specimen and deforms together with the specimen. The strains of the specimen under different loading conditions are monitored by the FBG sensor. Two main sources of warpage are considered: curing-induced warpage and warpage induced by coefficient of thermal expansion (CTE) mismatch. The effective chemical shrinkage and the equilibrium modulus are needed for the prediction of curing-induced warpage. Considering the various polymeric materials used in microelectronic packages, unique curing setups and procedures are developed for elastomers (extremely low modulus, medium viscosity, room-temperature curing), underfill materials (medium modulus, low viscosity, high-temperature curing), and epoxy molding compound (EMC: high modulus, high viscosity, high-temperature pressure curing), most notably (1) a zero-constraint mold for elastomers, (2) a two-stage curing procedure for underfill materials, and (3) a novel air-cylinder based setup for EMC. For the CTE-mismatch-induced warpage, the temperature-dependent CTE and the comprehensive viscoelastic properties are measured. The cured cylindrical specimen with an FBG sensor embedded in the center is further used for viscoelastic property measurements. Uniaxial compressive loading is applied to the specimen to measure the time-dependent Young's modulus. The test is repeated from room temperature to the reflow temperature to capture the time- and temperature-dependent Young's modulus. A separate high-pressure system is developed for the bulk modulus measurement. The time- and temperature-dependent bulk modulus is measured at the same temperatures as the Young's modulus. Master curves of the Young's modulus and bulk modulus of the EMC are created, and a single set of shift factors is determined from time-temperature superposition. Supplementary experiments are conducted to verify the validity of the assumptions associated with linear viscoelasticity. The measured time-temperature dependent properties are further verified by shadow moiré and Twyman/Green tests.
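
The time-temperature superposition step can be sketched as follows: modulus curves measured at several temperatures are shifted along the log-time axis by factors log10(aT) to form a master curve at a reference temperature. The WLF form and all numbers below are generic illustrative values, not the measured EMC properties.

```python
import numpy as np

T_ref = 25.0                                    # reference temperature, deg C
C1, C2 = 17.4, 51.6                             # generic WLF constants (assumed)

def log_aT(T):
    """WLF shift factor log10(aT) relative to T_ref."""
    return -C1 * (T - T_ref) / (C2 + (T - T_ref))

# Hypothetical measurements: (temperature, times in s, Young's modulus in MPa).
measurements = [
    (25.0,  np.array([1.0, 10.0, 100.0]), np.array([2400.0, 2300.0, 2150.0])),
    (75.0,  np.array([1.0, 10.0, 100.0]), np.array([1900.0, 1700.0, 1450.0])),
    (125.0, np.array([1.0, 10.0, 100.0]), np.array([900.0, 600.0, 350.0])),
]

master_logt, master_E = [], []
for T, times, modulus in measurements:
    master_logt.extend(np.log10(times) - log_aT(T))   # shift onto the reference scale
    master_E.extend(modulus)

for lt, E in sorted(zip(master_logt, master_E)):
    print(f"log10(t/aT) = {lt:6.2f}   E = {E:7.1f} MPa")
```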

Abstract:

There appear to be varied approaches to the sales process practiced by SMEs: in how they go about locating target customers, interfacing with prospects and new customers, presenting the benefits and features of their products and services, closing sales deals and building relationships, and in their understanding of what the buyer's needs are in the seller-buyer process. Recent research has revealed that while entrepreneurs and small business owners rely upon networking as an important source of sales, they lack marketing competencies, including personal selling skills and knowledge of what is involved in the sales process to close sales deals and build relationships. Small companies and start-ups with innovative products and services often find it difficult to persuade potential buyers of the merits of their offerings because, while the products and services may be excellent, they have not sufficiently developed the selling skills necessary to persuade their target customers.

Abstract:

Applications are subject to a continuous evolution process with a profound impact on their underlying data model, hence requiring frequent updates to the applications' class structure and to the database structure as well. This twofold problem, schema evolution and instance adaptation, usually known as database evolution, is addressed in this thesis. Additionally, we address concurrency and error recovery problems with a novel meta-model and its aspect-oriented implementation. Modern object-oriented databases provide features that help programmers deal with object persistence, as well as with related problems such as database evolution, concurrency and error handling. In most systems there are transparent mechanisms to address these problems; nonetheless, the database evolution problem still requires some human intervention, which consumes much of programmers' and database administrators' work effort. Earlier research has demonstrated that aspect-oriented programming (AOP) techniques enable the development of flexible and pluggable systems. In these earlier works, the schema evolution and instance adaptation problems were addressed as database management concerns. However, none of this research focused on orthogonal persistent systems. We argue that AOP techniques are well suited to address these problems in orthogonal persistent systems. Regarding concurrency and error recovery, earlier research showed that only syntactic obliviousness between the base program and aspects is possible. Our meta-model and framework follow an aspect-oriented approach focused on the object-oriented, orthogonal persistent context. The proposed meta-model is characterized by its simplicity, in order to achieve efficient and transparent database evolution mechanisms. Our meta-model supports multiple versions of a class structure by applying a class versioning strategy, thus enabling bidirectional application compatibility among versions of each class structure. That is to say, the database structure can be updated while earlier applications continue to work, as do later applications that know only the updated class structure. The specific characteristics of orthogonal persistent systems, as well as a metadata enrichment strategy within the application's source code, complete the inception of the meta-model and have motivated our research work. To test the feasibility of the approach, a prototype was developed. Our prototype is a framework that mediates the interaction between applications and the database, providing them with orthogonal persistence mechanisms. These mechanisms are introduced into applications as an aspect, in the aspect-oriented sense. Objects are not required to extend any superclass, implement any interface, or carry a particular annotation. Parametric type classes are also correctly handled by our framework. However, classes that belong to the programming environment must not be handled as versionable, due to restrictions imposed by the Java Virtual Machine. Regarding concurrency support, the framework provides applications with a multithreaded environment which supports database transactions and error recovery. The framework keeps applications oblivious to the database evolution problem, as well as to persistence. Programmers can update the applications' class structure because the framework will produce a new version of it at the database metadata layer.
Using our XML-based pointcut/advice constructs, the framework's instance adaptation mechanism can be extended, keeping the framework oblivious to this problem as well. The potential development gains provided by the prototype were benchmarked. In our case study, the results confirm that the transparency of these mechanisms has positive repercussions on programmer productivity, simplifying the entire evolution process at the application and database levels. The meta-model itself was also benchmarked in terms of complexity and agility. Compared with other meta-models, it requires fewer meta-object modifications in each schema evolution step. Other types of tests were carried out in order to validate the robustness of the prototype and the meta-model. For these tests, we used a small-size OO7 database because of its data model complexity. Since the developed prototype offers some features that were not observed in other known systems, performance benchmarks were not possible. However, the developed benchmark is now available for future performance comparisons with equivalent systems. In order to test our approach in a real-world scenario, we developed a proof-of-concept application. This application was developed without any persistence mechanisms. Using our framework and minor changes to the application's source code, we added these mechanisms. Furthermore, we tested the application in a schema evolution scenario. This real-world experience with our framework showed that applications remain oblivious to persistence and database evolution. In this case study, our framework proved to be a useful tool for programmers and database administrators. Performance issues and the single Java Virtual Machine concurrency model are the major limitations found in the framework.
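
A conceptual sketch (in Python, not the thesis's Java/AspectJ framework) of the class-versioning and instance-adaptation idea: stored instances carry a version tag, and registered converters adapt them, version by version, to the class structure a given application knows. All names and the dict-based "database record" are invented for illustration.

```python
converters = {}                     # (class_name, from_version, to_version) -> fn

def register_converter(class_name, from_version, to_version):
    def decorator(fn):
        converters[(class_name, from_version, to_version)] = fn
        return fn
    return decorator

@register_converter("Customer", 1, 2)
def split_name(record):
    """Schema change from v1 to v2: a single 'name' field becomes two fields."""
    first, _, last = record.pop("name").partition(" ")
    record.update(first_name=first, last_name=last, version=2)
    return record

def load(record, wanted_version):
    """Adapt a stored instance, step by step, to the version the caller knows."""
    while record["version"] < wanted_version:
        fn = converters[(record["class"], record["version"], record["version"] + 1)]
        record = fn(record)
    return record

stored = {"class": "Customer", "version": 1, "name": "Ada Lovelace"}
print(load(stored, 2))   # adapted to the newer class structure on load
```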

Abstract:

Visual recognition is a fundamental research topic in computer vision. This dissertation explores datasets, features, learning, and models used for visual recognition. In order to train visual models and evaluate different recognition algorithms, this dissertation develops an approach to collect object image datasets on web pages using an analysis of text around the image and of image appearance. This method exploits established online knowledge resources (Wikipedia pages for text; Flickr and Caltech data sets for images). The resources provide rich text and object appearance information. This dissertation describes results on two datasets. The first is Berg’s collection of 10 animal categories; on this dataset, we significantly outperform previous approaches. On an additional set of 5 categories, experimental results show the effectiveness of the method. Images are represented as features for visual recognition. This dissertation introduces a text-based image feature and demonstrates that it consistently improves performance on hard object classification problems. The feature is built using an auxiliary dataset of images annotated with tags, downloaded from the Internet. Image tags are noisy. The method obtains the text features of an unannotated image from the tags of its k-nearest neighbors in this auxiliary collection. A visual classifier presented with an object viewed under novel circumstances (say, a new viewing direction) must rely on its visual examples. This text feature may not change, because the auxiliary dataset likely contains a similar picture. While the tags associated with images are noisy, they are more stable when appearance changes. The performance of this feature is tested using PASCAL VOC 2006 and 2007 datasets. This feature performs well; it consistently improves the performance of visual object classifiers, and is particularly effective when the training dataset is small. With more and more collected training data, computational cost becomes a bottleneck, especially when training sophisticated classifiers such as kernelized SVM. This dissertation proposes a fast training algorithm called Stochastic Intersection Kernel Machine (SIKMA). This proposed training method will be useful for many vision problems, as it can produce a kernel classifier that is more accurate than a linear classifier, and can be trained on tens of thousands of examples in two minutes. It processes training examples one by one in a sequence, so memory cost is no longer the bottleneck to process large scale datasets. This dissertation applies this approach to train classifiers of Flickr groups with many group training examples. The resulting Flickr group prediction scores can be used to measure image similarity between two images. Experimental results on the Corel dataset and a PASCAL VOC dataset show the learned Flickr features perform better on image matching, retrieval, and classification than conventional visual features. Visual models are usually trained to best separate positive and negative training examples. However, when recognizing a large number of object categories, there may not be enough training examples for most objects, due to the intrinsic long-tailed distribution of objects in the real world. This dissertation proposes an approach to use comparative object similarity. 
The key insight is that, given a set of object categories which are similar and a set of categories which are dissimilar, a good object model should respond more strongly to examples from similar categories than to examples from dissimilar categories. This dissertation develops a regularized kernel machine algorithm to use this category dependent similarity regularization. Experiments on hundreds of categories show that our method can make significant improvement for categories with few or even no positive examples.
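
A toy sketch of the text-based image feature described above: an unannotated image is represented by pooling the tags of its k visually nearest neighbours in an auxiliary tagged collection. The random descriptors and tag lists below are invented; in the dissertation the auxiliary images and tags come from the Internet.

```python
import numpy as np

aux_features = np.random.rand(1000, 128)            # toy visual descriptors
aux_tags = [["cat", "grass"] if i % 2 else ["car", "road"] for i in range(1000)]
vocabulary = sorted({t for tags in aux_tags for t in tags})

def text_feature(query_feature, k=5):
    dists = np.linalg.norm(aux_features - query_feature, axis=1)
    neighbours = np.argsort(dists)[:k]               # k visually closest images
    counts = np.zeros(len(vocabulary))
    for idx in neighbours:
        for tag in aux_tags[idx]:
            counts[vocabulary.index(tag)] += 1
    return counts / k                                 # normalised tag histogram

print(text_feature(np.random.rand(128)))
```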

Abstract:

One of the most significant research topics in computer vision is object detection. Most reported object detection results localise the detected object within a bounding box but do not explicitly label the edge contours of the object. Since object contours provide a fundamental diagnostic of object shape, some researchers have initiated work on linear contour feature representations for object detection and localisation. However, linear contour feature-based localisation is highly dependent on the performance of linear contour detection within natural images, and this can be perturbed significantly by a cluttered background. In addition, the conventional approach to achieving rotation-invariant features is to rotate the feature receptive field to align with the local dominant orientation before computing the feature representation. Grid resampling after rotation adds extra computational cost and increases the total time needed to compute the feature descriptor. Although this is not an expensive process on current computers, it is still desirable for each step of the implementation to be fast to compute, especially when the number of local features grows and the application must run in real time on resource-limited "smart devices" such as mobile phones. Motivated by these issues, a 2D object localisation system is proposed in this thesis that matches features of edge contour points, an alternative method that takes advantage of shape information for object localisation. This is inspired by the fact that edge contour points comprise the basic components of shape contours. In addition, edge point detection is usually simpler to achieve than linear edge contour detection. Therefore, the proposed localisation system avoids the need for linear contour detection and reduces the pathological disruption caused by the image background. Moreover, since natural images usually contain many more edge contour points than interest points (i.e. corner points), we also propose new methods to generate rotation-invariant local feature descriptors without pre-rotating the feature receptive field, improving the computational efficiency of the whole system. In detail, the 2D object localisation system matches edge contour point features in a constrained search area based on the initial pose estimate produced by a prior object detection process. The local feature descriptor obtains rotation invariance by making use of the rotational symmetry of the hexagonal structure; accordingly, a set of local feature descriptors is proposed based on a hierarchically hexagonal grouping structure. Ultimately, the 2D object localisation system achieves a very promising performance when matching the proposed edge contour point features, with a mean correct labelling rate of 0.8654 and a mean false labelling rate of 0.0314 for the edge contour points on data from the Amsterdam Library of Object Images (ALOI). Furthermore, the proposed descriptors are evaluated against state-of-the-art descriptors and achieve competitive performance in terms of pose estimation, with around half-pixel pose error.
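
A minimal sketch of the first stage implied above: extracting edge contour points (rather than linear contours or corner points) as the primitives to be matched. OpenCV's Canny detector is used here as a stand-in on a synthetic image; the thesis's hexagonal descriptors and matching procedure are not reproduced.

```python
import cv2
import numpy as np

# Stand-in image: a white square on a black background (replace with a real
# object image, e.g. one from the ALOI collection).
image = np.zeros((200, 200), dtype=np.uint8)
cv2.rectangle(image, (60, 60), (140, 140), color=255, thickness=-1)

edges = cv2.Canny(image, threshold1=100, threshold2=200)

# Edge contour points as (x, y) coordinates; these, rather than bounding boxes
# or corners, are the primitives the localisation system matches.
ys, xs = np.nonzero(edges)
edge_points = np.stack([xs, ys], axis=1)
print(f"{len(edge_points)} edge contour points extracted")
```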

Abstract:

Universities are institutions that generate and manipulate large amounts of data as a result of the multiple functions they perform, the number of professionals involved and the students they serve. Information gathered from these data is used, for example, for operational activities and to support decision-making by managers. To assist managers in accomplishing their tasks, Information Systems (IS) are presented as tools that offer features aimed at improving the performance of their users, assisting with routine tasks and providing support for decision-making. The purpose of this research is to evaluate the influence of user characteristics and task characteristics on the success of IS. The study is of a descriptive-exploratory nature; therefore, the constructs used to define the conceptual model of the research are known and previously validated. However, individual characteristics of users and of the task are antecedents of IS success. In order to test the influence of these antecedents, a decision-support IS was developed using the Multicriteria Decision Aid Constructivist (MCDA-C) methodology, with the participation and involvement of users. The sample consisted of managers and former managers of UTFPR Campus Pato Branco who work or have worked in teaching, research, extension and management activities. For data collection, an experiment was conducted in the computer lab of the Campus Pato Branco in order to verify the hypotheses of the research. The experiment consisted of performing a task of distributing teaching positions among the academic departments using the IS developed. The task involved decision-making related to management activities. The data that fed the system were real data from the campus itself. A questionnaire was answered by the participants of the experiment in order to obtain data to verify the research hypotheses. The results obtained from the data analysis partially confirmed the influence of individual characteristics on IS success and fully confirmed the influence of task characteristics. The data collected failed to support a significant relationship between individual characteristics and individual impact. For many of the participants, the first contact with the IS was during the experiment, which indicates a lack of experience with the system. Regarding the success of the IS, the data revealed no significant relationship between Information Quality (IQ) and Individual Impact (II). It is noteworthy that the IS used in the experiment supports decision-making and the information provided by this system is strictly quantitative, which may have caused some conflict in the analysis of the criteria involved in the decision-making process. This is because the criteria of teaching, research, extension and management are interconnected, such that one reflects on another. Thus, the opinion of the managers does not depend exclusively on quantitative data, but also on the knowledge and value judgement that each manager has about the problem to be solved.

Abstract:

In recent years the technological world has grown by incorporating billions of small sensing devices that collect and share real-world information. As the number of such devices grows, it becomes increasingly difficult to manage all these new information sources. There is no uniform way to share, process and understand context information. In previous publications we discussed efficient ways to organize context information that are independent of structure and representation. However, our previous solution suffers from semantic sensitivity. In this paper we review semantic methods that can be used to minimize this issue, and propose an unsupervised semantic similarity solution that combines distributional profiles with public web services. Our solution was evaluated against the Miller-Charles dataset, achieving a correlation of 0.6.
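
A toy sketch of the distributional-profile idea mentioned above: each word is represented by a vector of co-occurrence counts from a corpus, similarity is scored by cosine, and the scores are correlated against human judgements, which is how an evaluation against Miller-Charles is typically done. The mini-corpus, word pairs and judgement values are invented for illustration.

```python
import numpy as np
from collections import Counter
from scipy.stats import pearsonr

corpus = [
    "the car drove down the road past the automobile dealer",
    "an automobile is a car used on the road",
    "the boy walked to school with the lad",
]

def profile(word, window=2):
    """Co-occurrence counts of words appearing within a small context window."""
    counts = Counter()
    for sentence in corpus:
        tokens = sentence.split()
        for i, tok in enumerate(tokens):
            if tok == word:
                counts.update(tokens[max(0, i - window):i] + tokens[i + 1:i + 1 + window])
    return counts

def cosine(a, b):
    keys = set(a) | set(b)
    va = np.array([a[k] for k in keys], float)
    vb = np.array([b[k] for k in keys], float)
    denom = np.linalg.norm(va) * np.linalg.norm(vb)
    return float(va @ vb / denom) if denom else 0.0

pairs = [("car", "automobile"), ("boy", "lad"), ("car", "boy")]
human = [3.9, 3.8, 0.1]                          # illustrative judgement scores
scores = [cosine(profile(w1), profile(w2)) for w1, w2 in pairs]
print("correlation with human scores:", pearsonr(scores, human)[0])
```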