902 results for Large Data Sets


Relevance: 100.00%

Abstract:

It has long been thought that tropical rainfall retrievals from satellites have large errors. Here we show, using a new daily 1-degree gridded rainfall data set based on about 1800 gauges from the India Meteorological Department (IMD), that modern satellite estimates are reasonably close to observed rainfall over the Indian monsoon region. Daily satellite rainfall estimates from the Global Precipitation Climatology Project (GPCP 1DD) and the Tropical Rainfall Measuring Mission (TRMM) Multisatellite Precipitation Analysis (TMPA) have been available since 1998. The heavy summer monsoon (June-September) rain over the Western Ghats and Himalayan foothills is captured in TMPA data. Away from hilly regions, the seasonal mean and intraseasonal variability of rainfall (averaged over regions a few hundred kilometers in linear dimension) from both satellite products are within about 15% of observations. Satellite data generally underestimate both the mean and the variability of rain, but the phase of intraseasonal variations is accurate. On synoptic timescales, TMPA gives a reasonable depiction of the pattern and intensity of torrential rain from individual monsoon low-pressure systems and depressions. A pronounced biennial oscillation of seasonal total central India rain is seen in all three data sets, with GPCP 1DD being closest to IMD observations. The new satellite data are a promising resource for the study of tropical rainfall variability.

Relevance: 100.00%

Abstract:

Changes in alcohol pricing have been documented as inversely associated with changes in consumption and alcohol-related problems. Evidence of the association between price changes and health problems is nevertheless patchy and is based to a large extent on cross-sectional state-level data, or time series of such cross-sectional analyses. Natural experimental studies have been called for. There was a substantial reduction in the price of alcohol in Finland in 2004 due to a reduction in alcohol taxes of one third, on average, and the abolition of duty-free allowances for travellers from the EU. These changes in the Finnish alcohol policy could be considered a natural experiment, which offered a good opportunity to study what happens with regard to alcohol-related problems when prices go down. The present study investigated the effects of this reduction in alcohol prices on (1) alcohol-related and all-cause mortality, and mortality due to cardiovascular diseases, (2) alcohol-related morbidity in terms of hospitalisation, (3) socioeconomic differentials in alcohol-related mortality, and (4) small-area differences in interpersonal violence in the Helsinki Metropolitan area. Differential trends in alcohol-related mortality prior to the price reduction were also analysed. A variety of population-based register data was used in the study. Time-series intervention analysis modelling was applied to monthly aggregations of deaths and hospitalisation for the period 1996-2006. These and other mortality analyses were carried out for men and women aged 15 years and over. Socioeconomic differentials in alcohol-related mortality were assessed on a before/after basis, mortality being followed up in 2001-2003 (before the price reduction) and 2004-2005 (after). Alcohol-related mortality was defined in all the studies on mortality on the basis of information on both underlying and contributory causes of death. 
Hospitalisation related to alcohol meant that there was a reference to alcohol in the primary diagnosis. Data on interpersonal violence were gathered from 86 administrative small areas in the Helsinki Metropolitan area, also assessed on a before/after basis with follow-up in 2002-2003 and 2004-2005. The statistical methods employed to analyse these data sets included time-series analysis, and Poisson and linear regression. The results of the study indicate that alcohol-related deaths increased substantially among men aged 40-69 years and among women aged 50-69 after the price reduction, when trends and seasonal variation were taken into account. The increase was mainly attributable to chronic causes, particularly liver diseases. Mortality due to cardiovascular diseases and all-cause mortality, on the other hand, decreased considerably among the over-69-year-olds. The increase in alcohol-related mortality in absolute terms among the 30-59-year-olds was largest among the unemployed and early-age pensioners, and those with a low level of education, social class or income. The relative differences in change between the education and social class subgroups were small. The employed and those under the age of 35 did not suffer from increased alcohol-related mortality in the two years following the price reduction. The gap between the age and education groups, which was substantial in the 1980s, thus broadened further. With regard to alcohol-related hospitalisation, there was an increase in both chronic and acute causes among men under the age of 70, and among women in the 50-69-year age group, when trends and seasonal variation were taken into account. Alcohol dependence and other alcohol-related mental and behavioural disorders formed the largest category both in the total number of chronic hospitalisations and in the increase. There was no increase in the rate of interpersonal violence in the Helsinki Metropolitan area, and there was even a decrease in domestic violence.
There was a significant relationship between area-level measures of social disadvantage and interpersonal violence, although the differences in the effects of the price reduction between areas were small. The findings of the present study suggest that a reduction in alcohol prices may lead to a substantial increase in alcohol-related mortality and morbidity. However, large population-group differences were observed in responsiveness to the price changes. In particular, the less privileged, such as the unemployed, were the most sensitive. In contrast, at least in the Finnish context, the younger generations and the employed do not appear to be adversely affected, and those in the older age groups may even benefit from cheaper alcohol in terms of decreased rates of CVD mortality. The results also suggest that reductions in alcohol prices do not necessarily affect interpersonal violence. The population-group differences in the effects of the price changes on alcohol-related harm should be acknowledged, and policy actions should therefore focus on the population subgroups that are primarily responsive to the price reduction.
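The intervention logic described above (a level shift estimated against trend and seasonal variation) can be sketched in Python. This is a toy with invented monthly counts and a simple pre-period seasonal adjustment, not the authors' full time-series intervention model:

```python
# A minimal sketch of a before/after intervention estimate on monthly
# counts, adjusting for month-of-year seasonality.  The synthetic
# numbers below are illustrative, not the Finnish register data.

def intervention_effect(counts, intervention_month, period=12):
    """Estimate the level shift at `intervention_month`, after removing
    month-of-year means estimated from the pre-intervention period."""
    pre = counts[:intervention_month]
    post = counts[intervention_month:]
    # Seasonal means from the pre-period only.
    seasonal = []
    for m in range(period):
        vals = [pre[i] for i in range(m, len(pre), period)]
        seasonal.append(sum(vals) / len(vals))

    def adjusted(series, offset):
        return [x - seasonal[(offset + i) % period] for i, x in enumerate(series)]

    pre_adj = adjusted(pre, 0)
    post_adj = adjusted(post, intervention_month)
    return sum(post_adj) / len(post_adj) - sum(pre_adj) / len(pre_adj)

# Five flat pre-years, then a jump of 20 deaths/month after the change.
seasonal_cycle = [100, 90, 95, 105, 110, 120, 125, 115, 105, 100, 95, 90]
pre_years = seasonal_cycle * 5
post_years = [x + 20 for x in seasonal_cycle] * 2
effect = intervention_effect(pre_years + post_years, len(pre_years))
```

With perfectly seasonal data and a clean step, the estimate recovers the injected shift of 20 deaths per month.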

Relevance: 100.00%

Abstract:

We propose a randomized algorithm for large-scale SVM learning which solves the problem by iterating over random subsets of the data. Crucial to the scalability of the algorithm is the size of the subsets chosen. In the context of text classification we show that, by using ideas from random projections, a sample size of O(log n) can be used to obtain a solution which is close to the optimal with high probability. Experiments on synthetic and real-life data sets demonstrate that the algorithm scales up SVM learners without loss in accuracy.
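The idea of iterating over random subsets can be sketched with a Pegasos-style sub-gradient SVM. This is a generic stand-in, not the authors' algorithm, and the O(log n) subset-size guarantee is a theoretical result this toy does not reproduce:

```python
import random

# A minimal sketch of SVM training by iterating over small random
# subsets of the data (Pegasos-style sub-gradient steps on the
# regularized hinge loss).  Data and parameters are invented.

def subset_svm(X, y, lam=0.01, subset_size=8, iters=2000, seed=0):
    rng = random.Random(seed)
    d = len(X[0])
    w = [0.0] * d
    for t in range(1, iters + 1):
        batch = rng.sample(range(len(X)), subset_size)
        eta = 1.0 / (lam * t)                    # standard Pegasos step size
        grad = [lam * wi for wi in w]            # regularization term
        for i in batch:
            margin = y[i] * sum(wj * xj for wj, xj in zip(w, X[i]))
            if margin < 1:                       # hinge-loss sub-gradient
                for j in range(d):
                    grad[j] -= y[i] * X[i][j] / subset_size
        w = [wj - eta * g for wj, g in zip(w, grad)]
    return w

# Linearly separable toy data: class +1 around (2, 2), class -1 around (-2, -2).
rng = random.Random(1)
X = [[2 + rng.gauss(0, 0.3), 2 + rng.gauss(0, 0.3)] for _ in range(50)] + \
    [[-2 + rng.gauss(0, 0.3), -2 + rng.gauss(0, 0.3)] for _ in range(50)]
y = [1] * 50 + [-1] * 50
w = subset_svm(X, y)
accuracy = sum((sum(wj * xj for wj, xj in zip(w, xi)) > 0) == (yi > 0)
               for xi, yi in zip(X, y)) / len(X)
```

Each update touches only a small random subset, which is what makes this family of methods scale to large training sets.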

Relevance: 100.00%

Abstract:

Despite significant advances in recent years, structure-from-motion (SfM) pipelines suffer from two important drawbacks. Apart from requiring significant computational power to solve the large-scale computations involved, such pipelines sometimes fail to reconstruct correctly when the accumulated error in incremental reconstruction is large or when the number of 3D-to-2D correspondences is insufficient. In this paper we present a novel approach to mitigate the above-mentioned drawbacks. Using an image match graph based on matching features, we partition the image data set into smaller sets, or components, which are reconstructed independently. Following such reconstructions, we utilise the available epipolar relationships that connect images across components to correctly align the individual reconstructions in a global frame of reference. This yields a significant speed-up of at least one order of magnitude and also mitigates the problem of reconstruction failures, with a marginal loss in accuracy. The effectiveness of our approach is demonstrated on several large-scale real-world data sets.
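The first stage, splitting the image match graph into independently reconstructible pieces, can be sketched with a union-find pass over the match edges. Image names and edges here are invented, a real pipeline would partition even a connected graph into smaller components, and the epipolar alignment step is not shown:

```python
# A minimal sketch of grouping images by connectivity in the match
# graph so each group can be reconstructed independently.

def connected_components(images, match_edges):
    parent = {img: img for img in images}

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]   # path halving
            a = parent[a]
        return a

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb

    for a, b in match_edges:
        union(a, b)
    groups = {}
    for img in images:
        groups.setdefault(find(img), []).append(img)
    return sorted(sorted(g) for g in groups.values())

# Hypothetical match graph: im0-im1-im2 share features, im3-im4 share features.
images = ["im0", "im1", "im2", "im3", "im4"]
edges = [("im0", "im1"), ("im1", "im2"), ("im3", "im4")]
components = connected_components(images, edges)
```

Each returned component would then be handed to an independent incremental reconstruction before global alignment.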

Relevance: 100.00%

Abstract:

We have estimated a metallicity map of the Large Magellanic Cloud (LMC) using the Magellanic Cloud Photometric Survey (MCPS) and Optical Gravitational Lensing Experiment (OGLE III) photometric data. This is a first-of-its-kind map of metallicity out to a radius of 4-5 degrees, derived using photometric data and calibrated using spectroscopic data of Red Giant Branch (RGB) stars. We identify the RGB in the V, (V - I) colour-magnitude diagrams of small subregions of varying sizes in both data sets. We use the slope of the RGB as an indicator of the average metallicity of a subregion, and calibrate the RGB slope to metallicity using spectroscopic data for field and cluster red giants in selected subregions. The average metallicity of the LMC is found to be [Fe/H] = -0.37 dex (sigma_[Fe/H] = 0.12) from MCPS data, and [Fe/H] = -0.39 dex (sigma_[Fe/H] = 0.10) from OGLE III data. The bar is found to be the most metal-rich region of the LMC. Both data sets suggest a shallow radial metallicity gradient out to a radius of 4 kpc (-0.049 +/- 0.002 dex kpc^-1 to -0.066 +/- 0.006 dex kpc^-1). Subregions in which the mean metallicity differs from the surrounding areas do not appear to correlate with previously known features; spectroscopic studies are required in order to assess their physical significance.
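The calibration step can be sketched as a simple linear fit from RGB slope to spectroscopic [Fe/H], then applied to photometric-only subregions. All numbers below are invented for illustration and are not the paper's calibration:

```python
# A minimal sketch of calibrating a photometric indicator (RGB slope)
# against spectroscopic metallicities, then applying the relation to
# subregions that have only photometry.

def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, my - slope * mx

# Hypothetical calibrating subregions: (RGB slope, spectroscopic [Fe/H]).
cal_slopes = [-4.0, -4.5, -5.0, -5.5]
cal_feh = [-0.25, -0.35, -0.45, -0.55]
a, b = fit_line(cal_slopes, cal_feh)

def feh_from_slope(rgb_slope):
    """Predict [Fe/H] for a subregion from its measured RGB slope."""
    return a * rgb_slope + b

metallicity = feh_from_slope(-4.8)
```

The mapped [Fe/H] values over all subregions are what such a calibration turns into a metallicity map.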

Relevance: 100.00%

Abstract:

The possibilities of digital research have altered the production, publication and use of research results. Academic research practice and culture are changing or have already been transformed, but to a large degree the system of academic recognition has not yet adapted to the practices and possibilities of digital research. This applies especially to research data, which are increasingly produced, managed, published and archived, but hardly play a role yet in practices of research assessment. The aim of the workshop was to bring together experts and stakeholders from research institutions, universities, scholarly societies and funding agencies in order to review, discuss and build on possibilities to implement the culture of sharing and to integrate publication of data into research assessment procedures. The report 'The Value of Research Data - Metrics for datasets from a cultural and technical point of view' was presented and discussed. Some of the key findings were that data sharing should be considered normal research practice; in fact, not sharing should be considered malpractice. Research funders and universities should support and encourage data sharing. There are a number of important aspects to consider when making data count in research and evaluation procedures. Metrics are a necessary tool in monitoring the sharing of data sets. However, data metrics are at present not very well developed, and there is not yet enough experience of what these metrics actually mean. It is important to implement the culture of sharing through codes of conduct in the scientific communities. For further key findings, please read the report.

Relevance: 100.00%

Abstract:

In the first part of the thesis we explore three fundamental questions that arise naturally when we conceive a machine learning scenario where the training and test distributions can differ. Contrary to conventional wisdom, we show that mismatched training and test distributions can in fact yield better out-of-sample performance. This optimal performance can be obtained by training with the dual distribution, an optimal training distribution that depends on the test distribution set by the problem, but not on the target function that we want to learn. We show how to obtain this distribution in both discrete and continuous input spaces, as well as how to approximate it in a practical scenario. The benefits of using this distribution are exemplified on both synthetic and real data sets.

In order to apply the dual distribution in the supervised learning scenario where the training data set is fixed, it is necessary to use weights to make the sample appear as if it came from the dual distribution. We explore the negative effect that weighting a sample can have. The theoretical decomposition of the use of weights regarding its effect on the out-of-sample error is easy to understand but not actionable in practice, as the quantities involved cannot be computed. Hence, we propose the Targeted Weighting algorithm that determines if, for a given set of weights, the out-of-sample performance will improve or not in a practical setting. This is necessary as the setting assumes there are no labeled points distributed according to the test distribution, only unlabeled samples.

Finally, we propose a new class of matching algorithms that can be used to match the training set to a desired distribution, such as the dual distribution (or the test distribution). These algorithms can be applied to very large datasets, and we show how they lead to improved performance on a large real dataset such as the Netflix dataset. Their favourable computational complexity is the main reason for their advantage over previous algorithms proposed in the covariate shift literature.
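The weighting idea underlying these methods can be sketched with classical importance weights w(x) = p_target(x)/p_train(x). The one-dimensional Gaussian densities here are an assumption for illustration; the thesis's matching algorithms and the Targeted Weighting procedure are not shown:

```python
import math
import random

# A minimal sketch of reweighting a fixed training sample so it behaves
# like a draw from a target ("dual" or test) distribution.

def gauss_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def importance_weights(xs, mu_train, mu_target, sigma=1.0):
    """w(x) = p_target(x) / p_train(x), normalised to mean 1."""
    w = [gauss_pdf(x, mu_target, sigma) / gauss_pdf(x, mu_train, sigma) for x in xs]
    s = sum(w)
    return [wi * len(xs) / s for wi in w]

rng = random.Random(0)
xs = [rng.gauss(0.0, 1.0) for _ in range(5000)]   # training draw, mean 0
w = importance_weights(xs, mu_train=0.0, mu_target=1.0)

# The weighted sample mean should move toward the target mean of 1.
weighted_mean = sum(wi * xi for wi, xi in zip(w, xs)) / len(xs)
plain_mean = sum(xs) / len(xs)
```

The gap between `plain_mean` and `weighted_mean` is exactly the shift the weights are meant to produce; the thesis's contribution concerns when such weighting helps or hurts, and how to match large samples efficiently.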

In the second part of the thesis we apply machine learning to the problem of behavior recognition. We develop a specific behavior classifier to study fly aggression, and we develop a system that allows behavior in videos of animals to be analyzed with minimal supervision. The system, which we call CUBA (Caltech Unsupervised Behavior Analysis), detects movemes, actions, and stories from time series describing the position of animals in videos. The method summarizes the data and provides biologists with a mathematical tool to test new hypotheses. Other benefits of CUBA include finding classifiers for specific behaviors without the need for annotation, as well as providing means to discriminate groups of animals, for example according to their genetic line.

Relevance: 100.00%

Abstract:

EXTRACT (SEE PDF FOR FULL ABSTRACT): This is a preliminary presentation of what has been observed at points spread across Mexico. The amount of existing data is large enough that an atlas was published in 1977. This atlas contains information which goes back to the beginning of the country. The original data sets from which this atlas was produced exist in a variety of storage forms, ranging from simple paper records to books and magnetic tapes.

Relevance: 100.00%

Abstract:

In the face of dramatic declines in groundfish populations and a lack of sufficient stock assessment information, a need has arisen for new methods of assessing groundfish populations. We describe the integration of seafloor transect data gathered by a manned submersible with high-resolution sonar imagery to produce a habitat-based stock assessment system for groundfish. The data sets used in this study were collected from Heceta Bank, Oregon, and were derived from 42 submersible dives (1988–90) and a multibeam sonar survey (1998). The submersible habitat survey investigated seafloor topography and groundfish abundance along 30-minute transects over six predetermined stations and found a statistical relationship between habitat variability and groundfish distribution and abundance. These transects were analyzed in a geographic information system (GIS) by using dynamic segmentation to display changes in habitat along the transects. We used the submersible data to extrapolate fish abundance within uniform habitat patches over broad areas of the bank by means of a habitat classification based on the sonar imagery. After applying a navigation correction to the submersible-based habitat segments, a good correspondence with major backscatter and topographic boundaries on the imagery was apparent. Extrapolation of the extent of uniform habitats was made in the vicinity of the dive stations, and a preliminary stock assessment of several species of demersal fish was calculated. Such a habitat-based approach will allow researchers to characterize marine communities over large areas of the seafloor.
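The extrapolation step can be sketched as density-times-area accounting over habitat classes: transect-derived fish density per class, multiplied by the area of that class mapped from the sonar imagery. The habitat names, densities, and areas below are invented for illustration:

```python
# A minimal sketch of a habitat-based stock estimate: sum, over habitat
# classes, the transect-derived density times the sonar-mapped area.

def habitat_stock_estimate(density_per_ha, habitat_area_ha):
    """Total abundance = sum of density (fish/ha) x mapped area (ha)."""
    return sum(density_per_ha[h] * habitat_area_ha[h] for h in density_per_ha)

# Hypothetical classes with transect densities and mapped areas.
density = {"rock ridge": 120.0, "boulder": 60.0, "mud": 5.0}   # fish per hectare
area = {"rock ridge": 300.0, "boulder": 900.0, "mud": 4000.0}  # hectares

total = habitat_stock_estimate(density, area)
```

The real analysis additionally restricts extrapolation to the vicinity of the dive stations and handles uncertainty per species; this sketch shows only the core density-by-area arithmetic.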

Relevance: 100.00%

Abstract:

The data streaming model provides an attractive framework for one-pass summarization of massive data sets at a single observation point. However, in an environment where multiple data streams arrive at a set of distributed observation points, sketches must be computed remotely and then aggregated through a hierarchy before queries may be conducted. As a result, many sketch-based methods for the single-stream case do not apply directly, either because the error introduced becomes large or because the methods assume that the streams are non-overlapping. These limitations hinder the application of these techniques to practical problems in network traffic monitoring and aggregation in sensor networks. To address this, we develop a general framework for evaluating and enabling robust computation of duplicate-sensitive aggregate functions (e.g., SUM and QUANTILE) over data produced by distributed sources. We instantiate our approach by augmenting the Count-Min and Quantile-Digest sketches to apply in this distributed setting, and analyze their performance. We conclude with an experimental evaluation to validate our analysis.
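The hierarchical aggregation setting can be sketched with a plain Count-Min sketch whose merge is element-wise addition. Note that this unaugmented merge double-counts items observed at several points, which is exactly the duplicate-sensitivity problem the paper's augmented sketches address:

```python
import hashlib

# A minimal sketch of a Count-Min sketch with a merge operation, the
# building block for aggregating sketches up a hierarchy of
# observation points.  Widths/depths and stream items are invented.

class CountMin:
    def __init__(self, width=256, depth=4):
        self.width, self.depth = width, depth
        self.table = [[0] * width for _ in range(depth)]

    def _cols(self, item):
        for row in range(self.depth):
            h = hashlib.sha256(f"{row}:{item}".encode()).hexdigest()
            yield int(h, 16) % self.width

    def add(self, item, count=1):
        for row, col in enumerate(self._cols(item)):
            self.table[row][col] += count

    def query(self, item):
        # Overestimates under collisions, never underestimates.
        return min(self.table[row][col] for row, col in enumerate(self._cols(item)))

    def merge(self, other):
        # Element-wise sum: correct for non-overlapping streams,
        # double-counts duplicates seen at both points.
        for row in range(self.depth):
            for col in range(self.width):
                self.table[row][col] += other.table[row][col]

site_a, site_b = CountMin(), CountMin()
for _ in range(10):
    site_a.add("flow-1")
for _ in range(7):
    site_b.add("flow-1")
site_a.merge(site_b)
estimate = site_a.query("flow-1")   # 10 + 7 observations of the flow
```

Making such merged estimates robust to overlapping streams is the augmentation the abstract describes.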

Relevance: 100.00%

Abstract:

Phytoplankton cell size influences particle sinking rate, food web interactions and biogeographical distributions. We present a model in which the uptake, storage and assimilation of nitrogen and carbon are explicitly resolved in different-sized phytoplankton cells. In the model, metabolism and the cellular C:N ratio are influenced by the accumulation of carbon polymers such as carbohydrate and lipid, which is greatest when cells are nutrient-starved or exposed to high light. Allometric relations and empirical data sets are used to constrain the range of possible C:N ratios, and indicate that larger cells can accumulate significantly more carbon storage compounds than smaller cells. When forced with extended periods of darkness combined with brief exposure to saturating irradiance, the model predicts that organisms large enough to accumulate significant carbon reserves may on average synthesize protein and other functional apparatus up to five times faster than smaller organisms. The advantage of storage in terms of average daily protein synthesis rate is greatest when modeled organisms were previously nutrient-starved and carbon storage reservoirs are saturated. Small organisms may therefore be at a disadvantage in terms of average daily growth rate in environments that involve prolonged periods of darkness and intermittent nutrient limitation. We suggest this mechanism is a significant constraint on phytoplankton C:N variability and cell size distribution in different oceanic regimes.

Relevance: 100.00%

Abstract:

Many of the most interesting questions ecologists ask lead to analyses of spatial data. Yet, perhaps confused by the large number of statistical models and fitting methods available, many ecologists seem to believe this is best left to specialists. Here, we describe the issues that need consideration when analysing spatial data and illustrate these using simulation studies. Our comparative analysis involves using methods including generalized least squares, spatial filters, wavelet revised models, conditional autoregressive models and generalized additive mixed models to estimate regression coefficients from synthetic but realistic data sets, including some which violate standard regression assumptions. We assess the performance of each method using two measures and using statistical error rates for model selection. Methods that performed well included the generalized least squares family of models and a Bayesian implementation of the conditional autoregressive model. Ordinary least squares also performed adequately in the absence of model selection, but had poorly controlled Type I error rates and so did not show the improvements in performance under model selection that the above methods did. Removing large-scale spatial trends in the response led to poor performance. These are empirical results; hence extrapolation of these findings to other situations should be performed cautiously. Nevertheless, our simulation-based approach provides much stronger evidence for comparative analysis than assessments based on single or small numbers of data sets, and should be considered a necessary foundation for statements of this type in future.
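Generalized least squares, one of the better-performing methods above, can be sketched directly from its estimator beta = (X'V^-1 X)^-1 X'V^-1 y with an exponential spatial covariance. The coordinates, data, and fixed covariance range below are invented; a real analysis would estimate the covariance parameters rather than fix them:

```python
import math

# A minimal sketch of GLS regression with spatially correlated errors:
# V[i][j] = exp(-d_ij / range), beta = (X'V^-1X)^-1 X'V^-1 y.

def solve(A, b):
    """Gaussian elimination with partial pivoting for small systems."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def gls(X, y, coords, corr_range=1.0):
    n, p = len(X), len(X[0])
    V = [[math.exp(-math.dist(coords[i], coords[j]) / corr_range)
          for j in range(n)] for i in range(n)]
    # Columns of V^-1 X, plus V^-1 y, via linear solves.
    Vinv_X = [solve(V, [X[i][j] for i in range(n)]) for j in range(p)]
    Vinv_y = solve(V, y)
    XtVinvX = [[sum(X[i][a] * Vinv_X[b][i] for i in range(n)) for b in range(p)]
               for a in range(p)]
    XtVinvy = [sum(X[i][a] * Vinv_y[i] for i in range(n)) for a in range(p)]
    return solve(XtVinvX, XtVinvy)

coords = [(0, 0), (0, 1), (1, 0), (1, 1), (2, 2)]
X = [[1.0, c[0] + c[1]] for c in coords]          # intercept + one covariate
y = [1.0 + 2.0 * (c[0] + c[1]) for c in coords]   # noiseless linear response
beta = gls(X, y, coords)
```

With a noiseless linear response the estimator recovers the true coefficients (1, 2) exactly, whatever the assumed covariance; the covariance matters once noise is spatially correlated.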

Relevance: 100.00%

Abstract:

1) Executive Summary
Legislation (Autism Act NI, 2011), a cross-departmental strategy (Autism Strategy 2013-2020) and a first action plan (2013-2016) have been developed in Northern Ireland in order to support individuals and families affected by Autism Spectrum Disorder (ASD), but without a prior thorough baseline assessment of need. At the same time, there are large existing data sets about the population in NI that had never been subjected to a secondary data analysis with regard to data on ASD. This report covers the first comprehensive secondary data analysis and thereby aims to inform future policy and practice.
Following a search of all existing large-scale, regional or national data sets relevant to the lives of individuals and families affected by Autism Spectrum Disorder (ASD) in Northern Ireland, extensive secondary data analyses were carried out. The focus of these secondary data analyses was to distill any ASD-related data from larger generic data sets. The findings are reported for each data set and follow a lifespan perspective, i.e., data relating to children are reported before data relating to adults.
Key findings:
Autism Prevalence:
Of children born in 2000 in the UK,
• 0.9% (1:109) were reported to have ASD when they were 5 years old in 2005;
• 1.8% (1:55) were reported to have ASD when they were 7 years old in 2007;
• 3.5% (1:29) were reported to have ASD when they were 11 years old in 2011.
In mainstream schools in Northern Ireland
• 1.2% of the children were reported to have ASD in 2006/07;
• 1.8% of the children were reported to have ASD in 2012/13.

Economic Deprivation:
• Families of children with autism (CWA) were 9%-18% worse off per week than families of children not on the autism spectrum (COA).
• Between 2006 and 2013, deprivation of CWA compared to COA nearly doubled as measured by eligibility for free school meals (from nearly 20% to 37%).
• In 2006, CWA and COA experienced similar levels of deprivation (approx. 20%); by 2013, a considerable deprivation gap had developed, with CWA experiencing 6% more deprivation than COA.
• Nearly 1/3 of primary school CWA lived in the most deprived areas in Northern Ireland.
• Nearly ½ of children with Asperger’s Syndrome who attended special school lived in the most deprived areas.

Unemployment:
• Mothers of CWA were 6% less likely to be employed than mothers of COA.
• Mothers of CWA earned 35%-56% less than mothers of COA.
• CWA were 9% less likely to live in two income families than COA.

Health:
• Pre-diagnosis, CWA were more likely than COA to have physical health problems, including difficulties with walking on level ground, speech and language, hearing, and eyesight, as well as asthma.
• At 3 years of age, CWA experienced poorer emotional and social health than COA; this difference increased significantly by the time they were 7 years of age.
• Mothers of young CWA had lower levels of life satisfaction and poorer mental health than mothers of young COA.
Education:
• In mainstream education, children with ASD aged 11-16 years reported less satisfaction with their social relationships than COA.
• Younger children with ASD (aged 5 and 7 years) were less likely to enjoy school, were bullied more, and were more reluctant to attend school than COA.
• CWA attended school 2-3 weeks less than COA.
• Children with Asperger’s Syndrome in special schools missed the equivalent of 8-13 school days more than children with Asperger’s Syndrome in mainstream schools.
• Children with ASD attending mainstream schooling were less likely to gain 5+ GCSEs A*-C or subsequently attend university.



Further and Higher Education:
• Enrolment rates for students with ASD have risen in Further Education (FE), from 0% to 0.7%.
• Enrolment rates for students with ASD have risen in Higher Education (HE), from 0.28% to 0.45%.
• Students with ASD chose to study different subjects than students without ASD, although other factors, e.g., gender, age etc. may have played a part in subject selection.
• Students with ASD from NI were more likely than students without ASD to choose Northern Irish HE Institutions rather than study outside NI.

Participation in adult life and employment:
• A small number of adults with ASD (n=99) have benefitted from DES employment provision over the past 12 years.
• It is unknown how many adults with ASD have received employment support elsewhere (e.g. Steps to Work).

Awareness and Attitudes in the General Population:
• In both the 2003 and 2012 NI Life and Times Survey (NILTS), NI public reported positive attitudes towards the inclusion of children with ASD in mainstream education (see also BASE Project Vol. 2).

Gap Analysis Recommendations:
This was the first comprehensive secondary analysis with regard to ASD of existing large-scale data sets in Northern Ireland. Data gaps were identified, and future replications would benefit from the inclusion of the following data:
• ASD should be recorded routinely in the following datasets:
o Census;
o Northern Ireland Survey of Activity Limitation (NISALD);
o Training for Success/Steps to Work; Steps to Success;
o Travel survey;
o Hate crime; and
o Labour Force Survey.
• Data should be collected on the destinations/qualifications of special school leavers.
• The NILT Survey autism module should be repeated in 5 years' time (2017) (see full report of the 1st NILT Survey autism module 2012 in BASE Project Report Volume 2).
• General public attitudes and awareness should be assessed for children and young people, using the Young Life and Times Survey (YLT) and the Kids Life and Times Survey (KLT); (this work is underway, Dillenburger, McKerr, Schubolz, & Lloyd, 2014-2015).

Relevance: 100.00%

Abstract:

The aim of this thesis is to extend bootstrap theory to panel data models. Panel data are obtained by observing several statistical units over several time periods. Their dual individual and temporal dimension makes it possible to control for unobservable heterogeneity between individuals and between time periods, and thus to carry out richer studies than with time series or cross-sectional data. The advantage of the bootstrap is that it yields more precise inference than classical asymptotic theory, or makes inference possible where nuisance parameters would otherwise prevent it. The method consists in drawing random samples that resemble the analysis sample as closely as possible. The statistical object of interest is estimated on each of these random samples, and the set of estimated values is used for inference. The literature contains some applications of the bootstrap to panel data, but without rigorous theoretical justification or under strong assumptions. This thesis proposes a bootstrap method better suited to panel data. Its three chapters analyse the method's validity and application. The first chapter posits a simple model with a single parameter and tackles the theoretical properties of the estimator of the mean. We show that the double resampling we propose, which accounts for both the individual and the temporal dimension, is valid in these models. Resampling in the individual dimension alone is not valid in the presence of temporal heterogeneity, and resampling in the temporal dimension alone is not valid in the presence of individual heterogeneity. The second chapter extends the first to the linear panel regression model.
Three types of regressors are considered: individual characteristics, temporal characteristics, and regressors that vary over both time and individuals. Using a two-way error components model, the ordinary least squares estimator and the residual bootstrap, we show that resampling in the individual dimension alone is valid for inference on the coefficients associated with regressors that vary only across individuals. Resampling in the temporal dimension is valid only for the sub-vector of parameters associated with regressors that vary only over time. Double resampling, for its part, is valid for inference on the full parameter vector. The third chapter re-examines the difference-in-differences exercise of Bertrand, Duflo and Mullainathan (2004). This estimator is commonly used in the literature to evaluate the impact of public policies. The empirical exercise uses panel data from the Current Population Survey on women's wages in the 50 states of the United States of America from 1979 to 1999. Pseudo-intervention variables are generated at the state level, and the tests are expected to conclude that these placebo policies have no effect on women's wages. Bertrand, Duflo and Mullainathan (2004) show that failing to take heterogeneity and temporal dependence into account leads to substantial test size distortions when evaluating the impact of public policies using panel data. One of the recommended solutions is to use the bootstrap. The double resampling method developed in this thesis corrects the test size problem and therefore allows the impact of public policies to be evaluated correctly.
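The double resampling can be sketched in Python: draw individuals with replacement and time periods with replacement, then keep the cells at the crossing of the two draws. The small N x T panel below is invented for illustration:

```python
import random

# A minimal sketch of double (individual x time) bootstrap resampling
# of a panel, applied here to the mean of an invented panel with both
# individual and time effects.

def double_bootstrap_means(panel, reps=500, seed=0):
    rng = random.Random(seed)
    n, t = len(panel), len(panel[0])
    means = []
    for _ in range(reps):
        ids = [rng.randrange(n) for _ in range(n)]     # resample individuals
        times = [rng.randrange(t) for _ in range(t)]   # resample time periods
        draw = [panel[i][s] for i in ids for s in times]
        means.append(sum(draw) / len(draw))
    return means

# 6 individuals x 4 periods, with an individual effect and a time trend.
panel = [[i + 0.1 * s for s in range(4)] for i in range(6)]
boot = double_bootstrap_means(panel)
point = sum(sum(row) for row in panel) / (6 * 4)
boot_mean = sum(boot) / len(boot)
```

Resampling only one dimension would ignore the heterogeneity in the other; the thesis's point is that this crossed scheme remains valid when both kinds of heterogeneity are present.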

Relevance: 100.00%

Abstract:

The growth of databases that contain increasingly difficult images with an ever larger number of categories is forcing the development of image representation techniques that are discriminative when working with multiple classes, and of algorithms that are efficient in learning and classification. This thesis explores the problem of classifying images according to the object they contain when a large number of categories is involved. We first investigate how a hybrid system formed by a generative model and a discriminative model can benefit the image classification task when the level of human annotation is minimal. For this task we introduce a new vocabulary using a dense representation of color-SIFT descriptors, and then investigate how the different parameters affect the final classification. We then propose a method to incorporate spatial information into the hybrid system, showing that context information is of great help for image classification. Next we introduce a new shape descriptor that represents an image by its local shape and its spatial layout, together with a kernel that incorporates this spatial information in a pyramidal form. Shape is represented by a compact vector, yielding a descriptor well suited to kernel-based learning algorithms. Our experiments show that this shape information achieves results similar to (and sometimes better than) appearance-based descriptors. We also investigate how different features can be combined for image classification, and show that the proposed shape descriptor together with an appearance descriptor substantially improves classification. Finally, we describe an algorithm that detects regions of interest automatically during training and classification.
This provides a way to suppress the image background and adds invariance to the position of objects within images. We show that computing shape and appearance over this region of interest and using random forest classifiers improves both classification and computation time. We compare our results with results from the literature using the same databases and the same training and classification protocols as the original authors. All the innovations introduced are shown to increase the final image classification performance.
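The pyramidal spatial kernel idea can be sketched as histogram intersection computed on grids of increasing resolution with per-level weights. The visual-word features below are invented, and this simplified kernel is a generic stand-in for the thesis's descriptor:

```python
# A minimal sketch of a pyramid-style spatial kernel: histograms of
# visual words per grid cell at several resolutions, compared by
# histogram intersection and combined with level weights.

def cell_histograms(points, words, level, vocab_size):
    """Histogram of visual words per cell on a 2^level x 2^level grid
    over the unit square."""
    cells = 2 ** level
    hists = [[0] * vocab_size for _ in range(cells * cells)]
    for (x, y), w in zip(points, words):
        cx = min(int(x * cells), cells - 1)
        cy = min(int(y * cells), cells - 1)
        hists[cy * cells + cx][w] += 1
    return hists

def pyramid_kernel(img1, img2, levels=2, vocab_size=4):
    score = 0.0
    for level in range(levels + 1):
        weight = 1.0 / (2 ** (levels - level))   # finer levels weigh more
        h1 = cell_histograms(*img1, level, vocab_size)
        h2 = cell_histograms(*img2, level, vocab_size)
        inter = sum(min(a, b) for c1, c2 in zip(h1, h2) for a, b in zip(c1, c2))
        score += weight * inter
    return score

# A toy "image": word 0 in the top-left area, word 1 in the bottom-right.
img = ([(0.1, 0.2), (0.2, 0.1), (0.8, 0.9)], [0, 0, 1])
self_score = pyramid_kernel(img, img)
```

Such a kernel can be plugged directly into any kernel-based learner, which is why compact vector descriptors pair naturally with it.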