462 results for Quotient respiratoire


Relevance: 10.00%

Abstract:

To estimate a parameter in an elliptic boundary value problem, the method of equation error chooses the value that minimizes the error in the PDE and boundary condition (the solution of the BVP having been replaced by a measurement). The estimated parameter converges to the exact value as the measured data converge to the exact value, provided Tikhonov regularization is used to control the instability inherent in the problem. The error in the estimated solution can be bounded in an appropriate quotient norm; estimates can be derived for both the underlying (infinite-dimensional) problem and a finite-element discretization that can be implemented in a practical algorithm. Numerical experiments demonstrate the efficacy and limitations of the method.
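A minimal numerical sketch of the equation-error idea for a one-dimensional model problem -(a u')' = f: once the (measured) solution is substituted, the PDE residual is linear in the coefficient a, so the estimate reduces to a Tikhonov-regularized linear least-squares problem. The grid size, noise level and regularization weight below are arbitrary placeholders, and the finite-difference discretization stands in for the finite-element scheme analysed in the paper.

```python
import numpy as np

# Equation-error sketch for -(a u')' = f on [0, 1] with u measured.
# Substituting the measured u makes the PDE residual linear in the nodal
# values of a, so the estimate solves a Tikhonov-regularized least-squares
# problem.  Grid size, noise level and alpha are arbitrary placeholders.

n = 101
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]

a_true = 1.0 + 0.5 * np.sin(np.pi * x)      # coefficient to be recovered
f = np.ones(n)                              # forcing term (assumed known)

def solve_bvp(a):
    """Forward solve used only to manufacture a synthetic 'measurement'."""
    a_mid = 0.5 * (a[:-1] + a[1:])          # coefficient at cell midpoints
    A = np.zeros((n, n))
    A[0, 0] = A[-1, -1] = 1.0               # homogeneous Dirichlet BCs
    for i in range(1, n - 1):
        A[i, i - 1] = -a_mid[i - 1] / h**2
        A[i, i] = (a_mid[i - 1] + a_mid[i]) / h**2
        A[i, i + 1] = -a_mid[i] / h**2
    rhs = f.copy()
    rhs[0] = rhs[-1] = 0.0
    return np.linalg.solve(A, rhs)

rng = np.random.default_rng(0)
u_meas = solve_bvp(a_true) + 1e-4 * rng.standard_normal(n)   # noisy data

# Equation-error operator E with E @ a ≈ f at the interior nodes.
E = np.zeros((n - 2, n))
for i in range(1, n - 1):
    du_left = (u_meas[i] - u_meas[i - 1]) / h
    du_right = (u_meas[i + 1] - u_meas[i]) / h
    # -(a u')' ≈ (a_{i-1/2} du_left - a_{i+1/2} du_right) / h, midpoint averages
    E[i - 1, i - 1] += 0.5 * du_left / h
    E[i - 1, i] += 0.5 * (du_left - du_right) / h
    E[i - 1, i + 1] += -0.5 * du_right / h

# Tikhonov regularization on the first differences of a.
alpha = 1e-3
D = (np.eye(n, k=1) - np.eye(n))[:-1] / h
A_stacked = np.vstack([E, np.sqrt(alpha) * D])
b_stacked = np.concatenate([f[1:-1], np.zeros(n - 1)])
a_est, *_ = np.linalg.lstsq(A_stacked, b_stacked, rcond=None)

print("max error in estimated coefficient:", float(np.max(np.abs(a_est - a_true))))
```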

Relevance: 10.00%

Abstract:

In 1969, Lovász asked whether every connected, vertex-transitive graph has a Hamilton path. This question has generated a considerable amount of interest, yet remains largely open. To date, no connected, vertex-transitive graph is known that does not possess a Hamilton path. For Cayley graphs, a subclass of vertex-transitive graphs, the following conjecture was made: Weak Lovász Conjecture: Every nontrivial, finite, connected Cayley graph is hamiltonian. The Chen-Quimpo Theorem proves that Cayley graphs on abelian groups are rich in Hamilton cycles, which prompted Alspach to make the following conjecture: Alspach Conjecture: Every 2k-regular, connected Cayley graph on a finite abelian group has a Hamilton decomposition. Alspach's conjecture is true for k = 1 and 2, but even the case k = 3 is still open. It is this case that this thesis addresses. Chapters 1 and 2 give introductory material and past work on the conjecture. Chapter 3 investigates the relationship between 6-regular Cayley graphs and associated quotient graphs, and gives a proof of Alspach's conjecture for the odd order case when k = 3. Chapter 4 provides a proof of the conjecture for even order graphs with 3-element connection sets that have an element generating a subgroup of index 2 and a linear dependency among the other generators. Chapter 5 shows that if Γ = Cay(A, {s1, s2, s3}) is a connected, 6-regular, abelian Cayley graph of even order, and for some 1 ≤ i ≤ 3, Δi = Cay(A/(si), {sj1, sj2}) is 4-regular and Δi ≄ Cay(ℤ3, {1, 1}), then Γ has a Hamilton decomposition. Alternatively stated, if Γ = Cay(A, S) is a connected, 6-regular, abelian Cayley graph of even order, then Γ has a Hamilton decomposition if S has no involutions and, for some s ∈ S, Cay(A/(s), S) is 4-regular and of order at least 4. Finally, the Appendices give computational data resulting from C and MAGMA programs used to generate Hamilton decompositions of certain non-isomorphic Cayley graphs on low order abelian groups.
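An illustrative sketch (not the thesis's C/MAGMA code) of the objects involved: it builds the Cayley graph Cay(A, S) for a small abelian group, checks 6-regularity, and forms the quotient graph on the cosets of the subgroup generated by one element of the connection set. The group Z_4 x Z_6 and the connection set are arbitrary examples.

```python
from itertools import product

# Build Cay(A, S) for the abelian group A = Z_m x Z_n, check 6-regularity,
# and form the quotient graph on A/<s0> obtained by factoring out <s0>.
# Group and connection set are hypothetical examples.

m, n = 4, 6
A = list(product(range(m), range(n)))
S = [(1, 0), (0, 1), (1, 2)]              # hypothetical 3-element connection set

def add(x, y):
    return ((x[0] + y[0]) % m, (x[1] + y[1]) % n)

def neg(x):
    return ((-x[0]) % m, (-x[1]) % n)

# Undirected Cayley graph: each generator and its inverse contribute an edge.
adj = {a: set() for a in A}
for a in A:
    for s in S:
        adj[a].add(add(a, s))
        adj[a].add(add(a, neg(s)))

degrees = {len(nbrs) for nbrs in adj.values()}
print("6-regular:", degrees == {6})       # holds when no element of S is an involution

# Quotient by the cyclic subgroup <s0>: cosets become the vertices of the
# quotient graph Cay(A/<s0>, S) used in the reduction described above.
s0 = S[0]
subgroup = set()
g = (0, 0)
while g not in subgroup:
    subgroup.add(g)
    g = add(g, s0)

def coset(a):
    return frozenset(add(a, h) for h in subgroup)

q_adj = {}
for a, nbrs in adj.items():
    q_adj.setdefault(coset(a), set()).update(coset(b) for b in nbrs)
for c in q_adj:
    q_adj[c].discard(c)                   # drop loops created by collapsing <s0>

print("quotient vertices:", len(q_adj),
      "quotient degrees:", sorted({len(v) for v in q_adj.values()}))
```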

Relevance: 10.00%

Abstract:

OBJECTIVE: The aim of the present pilot study is to show initial results of a multimodal approach using clinical scoring, morphological magnetic resonance imaging (MRI) and biochemical T2-relaxation and diffusion-weighted imaging (DWI) in their ability to assess differences between cartilage repair tissue after microfracture therapy (MFX) and matrix-associated autologous chondrocyte transplantation (MACT). METHOD: Twenty patients were cross-sectionally evaluated at different post-operative intervals from 12 to 63 months after MFX and 12-59 months after MACT. The two groups were matched by age (MFX: 36.0+/-10.4 years; MACT: 35.1+/-7.7 years) and post-operative interval (MFX: 32.6+/-16.7 months; MACT: 31.7+/-18.3 months). After clinical evaluation using the Lysholm score, 3T-MRI was performed obtaining the MR observation of cartilage repair tissue (MOCART) score as well as T2-mapping and DWI for multi-parametric MRI. Quantitative T2-relaxation was achieved using a multi-echo spin-echo sequence; the semi-quantitative diffusion quotient (signal intensity without diffusion-weighting divided by signal intensity with diffusion weighting) was prepared by a partially balanced, steady-state gradient-echo pulse sequence. RESULTS: No differences in Lysholm (P=0.420) or MOCART (P=0.209) score were observed between MFX and MACT. T2-mapping showed lower T2 values after MFX compared to MACT (P=0.039). DWI distinguished between healthy cartilage and cartilage repair tissue in both procedures (MFX: P=0.001; MACT: P=0.007). Correlations were found between the Lysholm and the MOCART score (Pearson: 0.484; P=0.031), between the Lysholm score and DWI (Pearson: -0.557; P=0.011), and a trend between the Lysholm score and T2 (Pearson: 0.304; P=0.193). CONCLUSION: Using T2-mapping and DWI, additional information could be gained compared to clinical scoring or morphological MRI. In combination, clinical, MR-morphological and MR-biochemical parameters can be seen as a promising multimodal tool in the follow-up of cartilage repair.
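As a simple illustration, the semi-quantitative diffusion quotient described in the methods (signal without diffusion weighting divided by signal with diffusion weighting) could be computed voxel-wise as in the sketch below; the image arrays and region-of-interest mask are synthetic placeholders, not study data.

```python
import numpy as np

# Voxel-wise semi-quantitative diffusion quotient: signal intensity without
# diffusion weighting divided by signal intensity with diffusion weighting.
# Arrays and the repair-tissue mask are synthetic placeholders.

rng = np.random.default_rng(1)
s_b0 = rng.uniform(200, 400, size=(64, 64))      # no diffusion weighting
s_dwi = rng.uniform(100, 300, size=(64, 64))     # diffusion-weighted signal
repair_mask = np.zeros((64, 64), dtype=bool)
repair_mask[20:30, 20:30] = True                 # hypothetical ROI over repair tissue

diffusion_quotient = np.divide(s_b0, s_dwi, out=np.zeros_like(s_b0),
                               where=s_dwi > 0)

print("mean diffusion quotient in repair ROI:",
      round(float(diffusion_quotient[repair_mask].mean()), 3))
```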

Relevance: 10.00%

Abstract:

BACKGROUND Deproteinized bovine bone mineral (DBBM) is one of the best-documented bone substitute materials for sinus floor elevation (SFE). PURPOSE DBBM is available in two particle sizes. Large particles are believed to facilitate improved neoangiogenesis compared with small ones. However, their impact on the rate of new bone formation, osteoconduction, and DBBM degradation has never been reported. In addition, the implant stability quotient (ISQ) has never been correlated to bone-to-implant contact (BIC) after SFE with simultaneous implant placement. MATERIALS AND METHODS Bilateral SFE with simultaneous implant placement was performed in 10 Göttingen minipigs. The two sides were randomized to receive large or small particle size DBBM. Two groups of 5 minipigs healed for 6 and 12 weeks, respectively. ISQ was recorded immediately after implant placement and at sacrifice. Qualitative histological differences were described, and bone formation, DBBM degradation, BIC and bone-to-DBBM contact (osteoconduction) were quantified histomorphometrically. RESULTS DBBM particle size had no qualitative or quantitative impact on the amount of newly formed bone, DBBM degradation, or BIC for either of the healing periods (p > 0.05). Small-size DBBM showed higher osteoconduction after 6 weeks than large-size DBBM (p < 0.001). After 12 weeks, this difference was no longer present. There was no significant correlation between BIC and ISQ. CONCLUSION Small and large particle sizes were equally predictable when DBBM was used for SFE with simultaneous implant placement.

Relevance: 10.00%

Abstract:

The importance of the cerebellum for non-motor functions is becoming increasingly evident. The influence of cerebellar lesions acquired during childhood on cognitive functions, however, is not well known. We present follow-up data from 24 patients who were operated on during childhood for benign cerebellar tumours. Owing to the benign histology of these tumours, neither radiotherapy nor chemotherapy was required. Post-operatively, these children were of normal intelligence, with a mean IQ of 99.1, performance intelligence quotient (PIQ) of 101.3 and verbal intelligence quotient (VIQ) of 96.8. However, 57% of patients showed abnormalities in subtesting. In addition, more extensive neuropsychological testing revealed significant problems with attention, memory, processing speed and interference. Visuo-constructive problems were marked for copying the Rey figure, but less pronounced for recall of the figure. Verbal fluency was more affected than design fluency. Behavioural deficits could be detected in 33% of patients. Attention deficit problems were marked in 12.5%, whereas others demonstrated psychiatric symptoms such as mutism, addiction problems, anorexia, uncontrolled temper tantrums and phobia. Age at tumour operation and size of tumour had no influence on outcome. Vermis involvement was related to an increase in neuropsychological and psychiatric problems. The observation that patients with left-sided cerebellar tumours were more affected than patients with right-sided tumours is probably also influenced by a more pronounced vermian involvement in the former group. In summary, this study confirms the importance of the cerebellum for cognitive development and points to the necessity of careful follow-up for these children to provide them with the necessary help to achieve full integration into professional life.

Relevance: 10.00%

Abstract:

BACKGROUND Non-alcoholic fatty liver disease (NAFLD) is a comorbidity of childhood obesity. OBJECTIVE We examined whole-body substrate metabolism and metabolic characteristics in obese adolescents with vs. without NAFLD. SUBJECTS Twelve obese (BMI ≥ 95th percentile) adolescents with and without NAFLD [intrahepatic triglyceride (IHTG) ≥5.0% vs. <5.0%] were pair-matched for race, gender, age and % body fat. METHODS Insulin sensitivity (IS) was assessed by a 3-h hyperinsulinemic-euglycemic clamp and whole-body substrate oxidation by indirect calorimetry during fasting and insulin-stimulated conditions. RESULTS Adolescents with NAFLD had increased (p < 0.05) abdominal fat, lipids, and liver enzymes compared with those without NAFLD. Fasting glucose concentration was not different between groups, but fasting insulin concentration was higher (p < 0.05) in the NAFLD group compared with those without. Fasting hepatic glucose production and hepatic IS did not differ (p > 0.1) between groups. Adolescents with NAFLD had higher (p < 0.05) fasting glucose oxidation and a tendency for lower fat oxidation. Adolescents with NAFLD had lower (p < 0.05) insulin-stimulated glucose disposal and lower peripheral IS compared with those without NAFLD. Although respiratory quotient (RQ) increased significantly from fasting to insulin-stimulated conditions in both groups (main effect, p < 0.001), the increase in RQ was lower in adolescents with NAFLD vs. those without (interaction, p = 0.037). CONCLUSION NAFLD in obese adolescents is associated with adverse cardiometabolic profile, peripheral insulin resistance and metabolic inflexibility.
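For reference, a minimal sketch of how the respiratory quotient is obtained from indirect calorimetry (RQ = VCO2/VO2) and how the fasting-to-clamp change in RQ serves as an index of metabolic flexibility; the gas-exchange values are invented examples, not study data.

```python
# Respiratory quotient from indirect calorimetry: RQ = VCO2 / VO2.
# The fasting-to-insulin-stimulated increase in RQ is the metabolic-flexibility
# index discussed above.  All gas-exchange values below are invented examples.

def respiratory_quotient(vco2_ml_min: float, vo2_ml_min: float) -> float:
    """RQ = CO2 produced / O2 consumed (both in mL/min, STPD)."""
    return vco2_ml_min / vo2_ml_min

fasting = respiratory_quotient(vco2_ml_min=180.0, vo2_ml_min=225.0)   # ~0.80
clamp = respiratory_quotient(vco2_ml_min=215.0, vo2_ml_min=230.0)     # ~0.93

delta_rq = clamp - fasting
print(f"fasting RQ {fasting:.2f}, clamp RQ {clamp:.2f}, delta {delta_rq:.2f}")
# A blunted rise in RQ from fasting to clamp indicates metabolic inflexibility.
```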

Relevance: 10.00%

Abstract:

INTRODUCTION Post-mortem cardiac MR exams show different contraction appearances of the left ventricle in cardiac short axis images. It was hypothesized that the grade of post-mortem contraction may be related to the post-mortem interval (PMI) or the cause of death, and that, as a phenomenon caused by internal rigor mortis, it may give further insight into the circumstances of death. METHOD AND MATERIALS The cardiac contraction grade was investigated in 71 post-mortem cardiac MR exams (mean age at death 52 y, range 12-89 y; 48 males, 23 females). In cardiac short axis images, the left ventricular lumen volume and the left ventricular myocardial volume were assessed by manual segmentation. The quotient of the two (LVQ) represents the grade of myocardial contraction. LVQ was correlated with the PMI, sex, age, cardiac weight, body mass and height, cause of death, and pericardial tamponade when present. For cardiac causes of death, a separate correlation was investigated for acute myocardial infarction cases and arrhythmic deaths. RESULTS LVQ values ranged from 1.99 (maximum dilatation) to 42.91 (maximum contraction) with a mean of 15.13. LVQ decreased slightly with increasing PMI, but without significant correlation. Pericardial tamponade was associated with higher LVQ values. Sex, age, body mass and height, cardiac weight and cause of death did not correlate with LVQ values. There was no difference in LVQ values between myocardial infarction without tamponade and arrhythmic deaths. CONCLUSION Based on the observations in our cases, the phenomenon of post-mortem myocardial contraction cannot be explained by the investigated variables, except in the pericardial tamponade cases. Further research addressing post-mortem myocardial contraction has to focus on other, less obvious factors that may also influence the early post-mortem phase.
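A minimal sketch of how the contraction quotient could be derived from the segmented volumes. The abstract does not state which volume is the numerator; the assumption here is LVQ = myocardial volume / lumen volume, so that a strongly contracted ventricle (small lumen) yields a high value, consistent with the reported range. The example volumes are invented.

```python
# Sketch of the left-ventricular quotient (LVQ) from manual segmentation.
# Assumption (not stated in the abstract): LVQ = myocardial volume / lumen
# volume, so a strongly contracted ventricle (small lumen) gives a high LVQ,
# consistent with the reported range (1.99 = maximal dilatation, 42.91 =
# maximal contraction).  Example volumes are invented.

def lv_quotient(myocardial_volume_ml: float, lumen_volume_ml: float) -> float:
    if lumen_volume_ml <= 0:
        raise ValueError("lumen volume must be positive")
    return myocardial_volume_ml / lumen_volume_ml

print(round(lv_quotient(myocardial_volume_ml=130.0, lumen_volume_ml=55.0), 2))  # dilated
print(round(lv_quotient(myocardial_volume_ml=130.0, lumen_volume_ml=4.0), 2))   # contracted
```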

Relevance: 10.00%

Abstract:

Colour is a basic criterion in the evaluation of virgin olive oil quality and a fundamental attribute in sensory analysis. This new quality parameter of virgin olive oil can be affected by the olive variety and degree of ripeness, the production area, the extraction process and storage conditions. Given the current importance of food typification (for example, in designations of origin), there is a need to characterise the colour of virgin olive oil. Two analytical methods were used for its determination: the modified ABT scale (bromothymol blue) and the HunterLab colorimeter. The results show that the a/b quotient is a suitable parameter for comparing and classifying virgin olive oils and provides additional information for the marketing of this product.
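The a/b quotient highlighted in the conclusion is simply the ratio of the red-green (a) and yellow-blue (b) coordinates reported by the HunterLab colorimeter; the following sketch, with invented readings, shows how oils could be ranked by it.

```python
# The a/b quotient used to compare and classify virgin olive oils is the ratio
# of the red-green (a) and yellow-blue (b) colour coordinates reported by the
# HunterLab colorimeter.  The sample readings below are invented.

def ab_quotient(a: float, b: float) -> float:
    return a / b

oils = {                      # hypothetical Hunter a, b readings per sample
    "sample_1": (-4.2, 38.5),
    "sample_2": (-1.1, 42.0),
    "sample_3": (-6.8, 35.2),
}

for name, (a, b) in sorted(oils.items(), key=lambda kv: ab_quotient(*kv[1])):
    print(f"{name}: a/b = {ab_quotient(a, b):.3f}")
```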

Relevance: 10.00%

Abstract:

Recent evidence that dissolved organic carbon (DOC) is a significant component of the organic carbon flux below the photic layer of the ocean (1), together with verification of high respiration rates in the dark ocean (2), suggests that the downward flux of DOC may play a major role in supporting respiration there. Here we show, on the basis of examination of the relation between DOC and apparent oxygen utilization (AOU), that the DOC flux supports ~10% of the respiration in the dark ocean. The contribution of DOC to pelagic respiration below the surface mixed layer can be inferred from the relation between DOC and apparent oxygen utilization (AOU, µM O2), a variable quantifying the cumulative oxygen consumption since a water parcel was last in contact with the atmosphere. However, assessments of DOC/AOU relations have been limited to specific regions of the ocean (3, 4) and have not considered the global ocean. We assembled a large data set (N = 9824) of concurrent DOC and AOU observations collected in cruises conducted throughout the world's oceans (fig. S1, table S1) to examine the relative contribution of DOC to AOU and, therefore, respiration in the dark ocean. AOU increased from an average (±SE) 96.3 ± 2.0 µM at the base of the surface mixed layer (100 m) to 165.5 ± 4.3 µM at the bottom of the main thermocline (1000 m), with a parallel decline in the average DOC from 53.5 ± 0.2 to 43.4 ± 0.3 µM C (Fig. 1). In contrast, there is no significant decline in DOC with increasing depth beyond 1000 m depth (Fig. 1), indicating that DOC exported with overturning circulation plays a minor role in supporting respiration in the ocean interior (5). Assuming a molar respiratory quotient of 0.69, the decline in DOC accounts for 19.6 ± 0.4% of the AOU within the top 1000 m (Fig. 1). This estimate represents, however, an upper limit, because the correlation between DOC and AOU is partly due to mixing of DOC-rich warm surface waters with DOC-poor cold thermocline waters (6). Removal of this effect by regressing DOC against AOU and water temperature indicates that DOC supports only 8.4 ± 0.3% of the respiration in the mesopelagic waters.
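A sketch of the type of calculation described above: fit DOC against AOU (and, to remove the mixing effect, also against temperature), then convert the DOC-per-AOU slope into the fraction of respiration supported by DOC using the molar respiratory quotient of 0.69. The synthetic data stand in for the 9824-observation compilation, so the printed percentages are illustrative only.

```python
import numpy as np

# Fit DOC against AOU (and, to remove the water-mass mixing effect, also
# against temperature), then convert the DOC-per-AOU slope into the fraction
# of dark-ocean respiration supported by DOC using the molar respiratory
# quotient RQ = 0.69 (mol C per mol O2).  Synthetic data stand in for the
# real 9824-observation compilation, so the printed numbers are illustrative.

rng = np.random.default_rng(42)
n = 2000
aou = rng.uniform(50, 200, n)                    # µM O2
temp = 20.0 - 0.08 * aou + rng.normal(0, 1, n)   # warmer water <-> lower AOU
doc = 60.0 - 0.10 * aou - 0.3 * (20.0 - temp) + rng.normal(0, 1.5, n)  # µM C

RQ = 0.69                                        # mol CO2 (≈ mol C) per mol O2

# Simple regression DOC ~ AOU (upper-limit estimate, mixing not removed).
slope_simple = np.polyfit(aou, doc, 1)[0]
frac_simple = -slope_simple / RQ

# Multiple regression DOC ~ AOU + temperature (mixing effect removed).
X = np.column_stack([aou, temp, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, doc, rcond=None)
frac_adjusted = -coef[0] / RQ

print(f"upper-limit DOC contribution: {100 * frac_simple:.1f}% of AOU")
print(f"temperature-adjusted contribution: {100 * frac_adjusted:.1f}% of AOU")
```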

Relevance: 10.00%

Abstract:

Respiration rates of 16 calanoid copepod species from the northern Benguela upwelling system were measured on board RRS Discovery in September/October 2010 to determine their energy requirements and assess their significance in the carbon cycle. Copepod species were sampled with different net types. Immediately after the hauls, samples were sorted to species and stages (16 species; females, males and C5 copepodids) according to Bradford-Grieve et al. (1999). Specimens were kept in temperature-controlled refrigerators for at least 12 h before they were used in experiments. Respiration rates of different copepod species were measured on board by optode respirometry (for details see Köster et al., 2008) with a 10-channel optode respirometer (PreSens Precision Sensing Oxy-10 Mini, Regensburg, Germany) under simulated in situ conditions in temperature-controlled refrigerators. Experiments were run in gas-tight glass bottles (12-13 ml). For each set of experiments, two controls without animals were measured under exactly the same conditions to compensate for potential bias. The number of animals per bottle depended on the copepods' size, stage and metabolic activity. Animals were not fed during the experiments but showed natural species-specific movements. Immediately after the experiments, all specimens were deep-frozen at -80 °C for later dry mass determination (after lyophilisation for 48 h) in the home lab. The carbon content (% of dry mass) of each species was measured by mass spectrometry in association with stable isotope analysis, and body dry mass was converted to units of carbon. For species without available carbon data, the mean value of all copepod species (44% of dry mass) was applied. For the estimation of the carbon requirements of copepod species, individual oxygen consumption rates were converted to carbon units, assuming that the respiration of 1 ml oxygen mobilises 0.44 mg of organic carbon, using a respiratory quotient (RQ) of 0.82 for a mixed diet consisting of proteins (RQ = 0.8-1.0), lipids (RQ = 0.7) and carbohydrates (RQ = 1.0) (Auel and Werner, 2003). Carbon ingestion rates were calculated using the energy budget and the potential maximum ingestion rate approach. To allow for physiological comparisons of respiration rates of deep- and shallow-living copepod species without the effects of ambient temperature and different individual body mass, individual respiration rates were temperature-adjusted (to 15 °C, Q10 = 2) and size-adjusted. A scaling coefficient of 0.76 (R² = 0.556) was used to standardise body dry mass to 0.3 mg (the mean dry mass of all analysed copepods), applying the allometric equation R_std = (R_15°C / M^0.76) × 0.3^0.76, where R_std is the standardised respiration rate, R_15°C the temperature-adjusted respiration rate, and M the individual dry mass in mg.
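A compact sketch of the conversion and standardisation steps described above: oxygen consumption converted to carbon units (RQ = 0.82, 1 ml O2 ≈ 0.44 mg C), temperature adjustment to 15 °C with Q10 = 2, and allometric scaling to 0.3 mg dry mass with exponent 0.76. The individual measurement values are invented.

```python
# Respiration-rate conversions described above.  The individual measurement
# values are invented; the constants (0.44 mg C per mL O2 at RQ = 0.82,
# Q10 = 2, reference temperature 15 °C, scaling exponent 0.76, reference dry
# mass 0.3 mg) are the ones given in the text.

ML_O2_TO_MG_C = 0.44      # 1 mL O2 respired mobilises ~0.44 mg C at RQ = 0.82
Q10 = 2.0
T_REF = 15.0              # °C
B = 0.76                  # allometric scaling exponent
M_REF = 0.3               # reference dry mass, mg

def carbon_respired_mg_per_day(o2_ml_per_day: float) -> float:
    """Convert an O2 consumption rate to a carbon respiration rate."""
    return o2_ml_per_day * ML_O2_TO_MG_C

def standardised_respiration(rate: float, temp_c: float, dry_mass_mg: float) -> float:
    """Temperature- (Q10) and size-adjust an individual respiration rate:
    R_std = (R_15C / M**0.76) * 0.3**0.76."""
    rate_15c = rate * Q10 ** ((T_REF - temp_c) / 10.0)     # adjust to 15 °C
    return (rate_15c / dry_mass_mg ** B) * M_REF ** B      # adjust to 0.3 mg

# Hypothetical individual: 0.020 mL O2 per day at 8 °C, dry mass 0.9 mg.
o2_rate = 0.020
print("carbon respired [mg C/d]:", round(carbon_respired_mg_per_day(o2_rate), 4))
print("standardised rate [mL O2/d]:",
      round(standardised_respiration(o2_rate, temp_c=8.0, dry_mass_mg=0.9), 4))
```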

Relevance: 10.00%

Abstract:

Ocean acidification affects Arctic ecosystems with particular intensity, and marine photosynthetic organisms are a primary target, although the consequences of this process for the carbon fluxes of Arctic algae are still unknown. The alteration of the cellular carbon balance due to physiological acclimation to an increased CO2 concentration (1300 ppm) was analysed in the common Arctic brown seaweeds Desmarestia aculeata and Alaria esculenta from Kongsfjorden (Svalbard). The growth rate of D. aculeata was negatively affected by CO2 enrichment, while that of A. esculenta was positively affected, as a result of a different reorganization of the cellular carbon budget in the two species. Desmarestia aculeata showed increased respiration, enhanced accumulation of storage biomolecules and elevated release of dissolved organic carbon, whereas A. esculenta showed decreased respiration and lower accumulation of storage biomolecules. Gross photosynthesis (measured both as O2 evolution and 14C fixation) was not affected in either species, suggesting that photosynthesis was already saturated at normal CO2 conditions and did not participate in the acclimation response. However, the electron transport rate changed in opposite directions in the two species, indicating different energy requirements between treatments and species specificity. High CO2 levels also affected N-metabolism, and 13C isotopic discrimination values from algal tissue pointed to a deactivation of carbon concentrating mechanisms. Since increased CO2 has the potential to modify physiological mechanisms in different ways in the species studied, this may lead to changes in the Arctic seaweed community, which may propagate to the rest of the food web.

Relevance: 10.00%

Abstract:

Let G be a reductive complex Lie group acting holomorphically on normal Stein spaces X and Y, which are locally G-biholomorphic over a common categorical quotient Q. When is there a global G-biholomorphism X → Y? If the actions of G on X and Y are what we, with justification, call generic, we prove that the obstruction to solving this local-to-global problem is topological and provide sufficient conditions for it to vanish. Our main tool is the equivariant version of Grauert's Oka principle due to Heinzner and Kutzschebauch. We prove that X and Y are G-biholomorphic if X is K-contractible, where K is a maximal compact subgroup of G, or if X and Y are smooth and there is a G-diffeomorphism ψ : X → Y over Q, which is holomorphic when restricted to each fibre of the quotient map X → Q. We prove a similar theorem when ψ is only a G-homeomorphism, but with an assumption about its action on G-finite functions. When G is abelian, we obtain stronger theorems. Our results can be interpreted as instances of the Oka principle for sections of the sheaf of G-biholomorphisms from X to Y over Q. This sheaf can be badly singular, even for a low-dimensional representation of SL2(ℂ). Our work is in part motivated by the linearisation problem for actions on ℂn. It follows from one of our main results that a holomorphic G-action on ℂn, which is locally G-biholomorphic over a common quotient to a generic linear action, is linearisable.

Relevance: 10.00%

Abstract:

This paper aims to develop a quasi-dynamic interregional input-output model for evaluating the macro-economic impacts of small city development. The features of the model are as follows: (1) the consumption expenditure of households is treated as an endogenous variable; (2) technological change is determined by the change in the industrial Location Quotient caused by firms' investment activities; and (3) a strong feedback function between the city design and the economic analysis is provided. To check the performance of the model, Saemangeum's Flux City Design Plan is used as the simulation target.
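For concreteness, the industrial Location Quotient that drives the technological-change rule can be computed as a region's employment share in an industry divided by the corresponding national share; the sketch below uses invented employment figures.

```python
# Location Quotient: LQ = (regional employment share of an industry) /
# (national employment share of that industry).  LQ > 1 indicates that the
# industry is over-represented in the region.  Figures below are invented.

regional_employment = {"manufacturing": 12_000, "services": 30_000, "agriculture": 8_000}
national_employment = {"manufacturing": 900_000, "services": 3_600_000, "agriculture": 500_000}

def location_quotient(industry: str) -> float:
    regional_total = sum(regional_employment.values())
    national_total = sum(national_employment.values())
    regional_share = regional_employment[industry] / regional_total
    national_share = national_employment[industry] / national_total
    return regional_share / national_share

for ind in regional_employment:
    print(f"{ind}: LQ = {location_quotient(ind):.2f}")
```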

Relevance: 10.00%

Abstract:

This paper investigates the current situation of industrial agglomeration in Costa Rica, utilizing firm-level panel data for the period 2008-2012. We calculated the Location Quotient and the Theil index based on employment by industry and found that 14 cantons have industrial agglomerations in 9 industries. The analysis is consistent with the nature of the specific industries, the development of areas of concentration around free zones, and the evolving participation of Costa Rica in global value chains (GVCs).
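A sketch of the concentration measures mentioned above, focusing on the Theil index. Whether this particular variant (industry employment shares weighted against overall employment shares) matches the paper's exact definition is an assumption, and the canton-level figures are invented.

```python
import math

# One common form of the Theil index used to measure how concentrated an
# industry's employment is across regions: T = sum_i s_i * ln(s_i / w_i),
# where s_i is region i's share of the industry's employment and w_i its share
# of total employment.  T = 0 means the industry is spread like overall
# employment; larger T means stronger concentration.  Whether this exact
# variant matches the paper's definition is an assumption; figures are invented.

industry_employment = {"canton_A": 4_000, "canton_B": 500, "canton_C": 300}
total_employment = {"canton_A": 40_000, "canton_B": 35_000, "canton_C": 30_000}

def theil_index(industry: dict, total: dict) -> float:
    ind_sum = sum(industry.values())
    tot_sum = sum(total.values())
    t = 0.0
    for region, emp in industry.items():
        s_i = emp / ind_sum
        w_i = total[region] / tot_sum
        if s_i > 0:
            t += s_i * math.log(s_i / w_i)
    return t

print(f"Theil concentration index: {theil_index(industry_employment, total_employment):.3f}")
```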

Relevance: 10.00%

Abstract:

This thesis discusses correction methods that compensate for variations in lighting conditions in colour image and video applications. These variations often cause Computer Vision algorithms that use colour features to describe objects to fail. Three research questions are formulated that define the framework of the thesis. The first question addresses the similarities in photometric behaviour between images of adjacent surfaces. Based on an analysis of the image formation model in dynamic situations, this thesis proposes a model that predicts the colour variations of a region of an image from the variations of the surrounding regions. This model is called the Quotient Relational Model of Regions. It is valid when the light sources illuminate all of the surfaces included in the model; these surfaces are placed close to each other, have similar orientations, and are primarily Lambertian. Under these circumstances, a linear combination can be established between the photometric responses of the regions. No previous work proposing such a relational model was found in the scientific literature. The second question examines whether those similarities can be used to correct unknown photometric variations in an unknown region from known adjacent regions. A method called Linear Correction Mapping is proposed, which provides an affirmative answer under the circumstances characterised previously. A training stage is required to determine the parameters of the model. The method, initially developed for a single camera, is extended to non-overlapping multi-camera architectures; to this end, only a few image samples of the same object acquired by all of the cameras are required. Furthermore, both the lighting variations and the changes in the camera exposure settings are covered by the correction mapping. Every image correction method fails when the image of the object to be corrected is overexposed or its signal-to-noise ratio is very low. Thus, the third question asks whether the acquisition process can be controlled to obtain an optimal exposure when the lighting conditions are uncontrolled. A Camera Exposure Control method is proposed that is capable of maintaining a suitable exposure provided that the lighting variations can be accommodated within the dynamic range of the camera. Each of the proposed methods was evaluated individually. The experimental methodology consisted of first selecting scenarios that cover representative situations for which the methods are theoretically valid. Linear Correction Mapping was validated in three object re-identification applications (vehicles, faces and persons) that use the colour distributions of the objects as features. Camera Exposure Control was tested in an outdoor parking scenario. In addition, several performance indicators were defined to objectively compare the results with other relevant correction and auto-exposure methods from the state of the art. The evaluation showed that the proposed methods outperform the compared ones in most situations. Based on these results, the answers to the research questions are affirmative in limited circumstances: the hypotheses regarding the prediction, the correction based on it, and the auto exposure are feasible in the situations identified throughout the thesis, although they cannot be guaranteed in general. Finally, the presented work raises new questions and scientific challenges, which are highlighted as future research work.
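A minimal sketch, with synthetic data, of the linear prediction idea behind the Quotient Relational Model of Regions and Linear Correction Mapping: the photometric variation of a target region is modelled as a linear combination of the variations of adjacent regions, fitted by least squares in a training stage and then used to correct new observations. This is an illustration of the principle, not the thesis's implementation.

```python
import numpy as np

# Linear prediction of a target region's photometric variation from the
# variations of adjacent regions, fitted by least squares in a training stage
# and then used to correct new frames.  All data are synthetic; this is not
# the thesis's implementation.

rng = np.random.default_rng(7)
n_train, n_neighbours = 200, 3

# Per-frame photometric variation (e.g. ratio to a reference frame) of the
# neighbouring regions, one column per region, plus an intercept column.
neighbour_var = rng.uniform(0.5, 1.5, size=(n_train, n_neighbours))
X = np.column_stack([neighbour_var, np.ones(n_train)])

# Training target: the variation actually observed in the region of interest.
true_weights = np.array([0.5, 0.3, 0.2, 0.0])
target_var = X @ true_weights + rng.normal(0, 0.01, n_train)

# Training stage: least-squares fit of the linear correction mapping.
weights, *_ = np.linalg.lstsq(X, target_var, rcond=None)

# At run time, predict the (unobserved) photometric variation of the target
# region from its neighbours and use it to normalise the observed colour.
new_neighbour_var = np.array([1.2, 0.9, 1.1])
predicted_var = np.append(new_neighbour_var, 1.0) @ weights
observed_colour = np.array([140.0, 95.0, 60.0])          # RGB under new lighting
corrected_colour = observed_colour / predicted_var       # back to reference lighting

print("predicted variation:", round(float(predicted_var), 3))
print("corrected colour:", corrected_colour.round(1))
```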