986 results for First-derivative UV spectrophotometry


Relevance:

100.00%

Publisher:

Abstract:

The present work aims to study the characteristics of the alloy Al–7%Si–0.3Mg (AA356), specifically the macrostructure, microstructure, and mechanical properties of AA356 ingots cast in metal molds and in sand molds, so that the resulting structures can be compared through the difference in cooling rates. The choice of this alloy is explained by its excellent combination of properties, such as low solidification shrinkage, good fluidity, good weldability, high wear resistance, and a high strength-to-weight ratio, which give it wide application in general engineering, particularly in the automotive and aerospace industries. In this work we verify this difference in properties through two different cooling rates. The solidification temperatures were monitored by thermocouples, from which the cooling curve was built as a tool to evaluate the effectiveness of grain refining, since it yields important properties of the alloy, such as the latent heat of solidification, the liquidus and solidus temperatures, and the total solidification time, and allows identification of the presence of inoculants for grain refinement. The thermal analysis is supported by the graphing software Origin, with which the cooling curve and its first derivative, the cooling rate, are obtained. After the thermal analysis, macrographs of the ingots were analyzed to observe the macrostructures obtained in both types of ingots, along with micrographs sampled at strategic positions in the ingots to correlate with the microstructure. Finally, Brinell hardness data were collected from the ingots, so that the properties of each ingot could be correlated with its cooling rate. We found that the ingots obtained in metal molds showed properties superior to those obtained in sand molds.
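Since the cooling rate is obtained as the first derivative of the cooling curve, the step lends itself to a short numerical illustration. A minimal sketch, assuming regularly sampled thermocouple readings (the time base, temperatures, and arrest threshold below are invented placeholders, not data from this study):

```python
import numpy as np

# Hypothetical thermocouple log: time (s) and temperature (deg C) with a
# solidification arrest between t = 40 s and t = 70 s.
t = np.linspace(0.0, 120.0, 241)
T = np.where(t < 40, 700.0 - 3.0 * t,
             np.where(t < 70, 580.0, 580.0 - 2.0 * (t - 70.0)))

# Cooling rate = first derivative of the cooling curve.
dTdt = np.gradient(T, t)

# A near-zero derivative flags the solidification arrest; its bounds
# approximate the liquidus/solidus crossings and the solidification time.
arrest = np.where(np.abs(dTdt) < 0.5)[0]
print(f"arrest from t = {t[arrest[0]]:.1f} s to t = {t[arrest[-1]]:.1f} s")
```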

Relevance:

100.00%

Publisher:

Abstract:

The purpose of the present study was to analyze the visual control of braking a bicycle when the cyclist is surprised by an obstacle in his way. According to Lee (1976), visually controlled braking based on time-to-collision information uses the optic variable tau and its first time derivative, tau-dot, to initiate the braking action and regulate its intensity. Seven young adults performed a bicycle braking task on a curvilinear trajectory under distinct velocity (high, medium, and low) and uncertainty (certainty and uncertainty) conditions. Results showed that, independently of velocity and uncertainty levels, participants used tau and tau-dot to initiate and regulate the braking action, avoiding collision with the obstacle. Cognitive, attentional, and other psychological factors resulting from increased velocity and uncertainty were not capable of altering the use of time-to-collision information, corroborating the tested hypothesis.
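Lee's tau is commonly approximated as the ratio of the remaining gap to the closing speed, and tau-dot as its time derivative. A minimal sketch with invented kinematics (the -0.5 criterion is the standard reading of Lee's braking model, not a result from this study):

```python
import numpy as np

# Hypothetical approach: initial gap 20 m, speed 6 m/s, braking at 1.2 m/s^2.
t = np.linspace(0.0, 4.0, 401)
v = 6.0 - 1.2 * t                    # closing speed (m/s)
x = 20.0 - (6.0 * t - 0.6 * t**2)    # remaining gap (m)

tau = x / v                          # time to collision at current speed
tau_dot = np.gradient(tau, t)        # its first derivative in time

# In Lee's model, deceleration is adequate as long as tau_dot >= -0.5.
print(f"min tau-dot during the approach: {tau_dot.min():.3f}")
```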

Relevance:

100.00%

Publisher:

Abstract:

Objective. To evaluate bacterial growth inhibition, mechanical properties, and compound release rate and stability of copolymers incorporated with anthocyanin (ACY; Vaccinium macrocarpon). Methods. Resin samples were prepared (Bis-GMA/TEGDMA at 70/30 mol%) and incorporated with 2 w/w% of either ACY or chlorhexidine (CHX), except for the control group. Samples were individually immersed in a bacterial culture (Streptococcus mutans) for 24 h. Cell viability (n = 3) was assessed by counting the number of colony-forming units on replica agar plates. Flexural strength (FS) and elastic modulus (E) were tested on a universal testing machine (n = 8). Compound release and chemical stability were evaluated by UV spectrophotometry and ¹H NMR (n = 3). Data were analyzed by one-way ANOVA and Tukey's test (α = 0.05). Results. Both compounds inhibited S. mutans growth, with CHX being the more effective (P < 0.05). The control resin had the lowest FS and E values, followed by ACY and CHX, with a statistical difference between the control and CHX groups for both mechanical properties (P < 0.05). The 24 h compound release rates were 1.33 μg/mL for ACY and 1.92 μg/mL for CHX. ¹H NMR spectra suggest that both compounds remained stable after being released in water. Conclusion. The present findings indicate that anthocyanins might be used as a natural antibacterial agent in resin-based materials.
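A minimal sketch of the statistical treatment named above (one-way ANOVA followed by Tukey's test), using SciPy and statsmodels on placeholder colony counts rather than the study's data:

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Placeholder CFU counts for the three resin groups (n = 3 each).
control = np.array([120.0, 115.0, 130.0])
acy     = np.array([80.0, 75.0, 88.0])
chx     = np.array([30.0, 28.0, 35.0])

# One-way ANOVA across the three groups.
F, p = f_oneway(control, acy, chx)
print(f"ANOVA: F = {F:.2f}, p = {p:.4f}")

# Tukey's HSD post hoc test at alpha = 0.05.
values = np.concatenate([control, acy, chx])
groups = ["control"] * 3 + ["ACY"] * 3 + ["CHX"] * 3
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```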

Relevance:

100.00%

Publisher:

Abstract:

Sodium azumolene is a drug designed to fight malignant hyperthermia (MH), which is characterized by genetic predisposition and triggered by the use of inhalational anesthetics. The drug is a water-soluble analogue of dantrolene sodium, about 30-fold more water-soluble, which is an advantage for emergency use. To our knowledge, no analytical method for sodium azumolene raw material or dosage form has been published so far. The objective of the present investigation was to develop and validate analytical methods for the chemical identification and quantification of sodium azumolene. Sodium azumolene was characterized with regard to its thermal behavior, by differential thermal analysis and thermogravimetric analysis, and to its visible, UV, and infrared absorption. To accurately assess the sodium azumolene content, three different analytical methods (visible and UV spectrophotometry and high-performance liquid chromatography) were developed and validated. All methods proved to be linear, accurate, precise, and reliable. Azumolene has been shown to be equipotent to dantrolene in the treatment and prevention of an MH crisis, and its great advantage over dantrolene is better water solubility. This study characterized sodium azumolene and presents new analytical methods that have not been reported before.
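Quantification by UV spectrophotometry of this kind typically rests on a linear (Beer–Lambert) calibration. A minimal sketch of the linearity check, with invented concentrations and absorbances:

```python
import numpy as np
from scipy.stats import linregress

# Hypothetical calibration standards: concentration (ug/mL) vs absorbance.
conc = np.array([2.0, 4.0, 8.0, 12.0, 16.0, 20.0])
absb = np.array([0.101, 0.205, 0.398, 0.601, 0.795, 1.002])

fit = linregress(conc, absb)
print(f"slope={fit.slope:.4f}  intercept={fit.intercept:.4f}  "
      f"r^2={fit.rvalue**2:.4f}")

# Back-calculate an unknown sample from its measured absorbance.
a_sample = 0.512
c_sample = (a_sample - fit.intercept) / fit.slope
print(f"estimated concentration: {c_sample:.2f} ug/mL")
```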

Relevance:

100.00%

Publisher:

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)

Relevance:

100.00%

Publisher:

Abstract:

Lignin is a macromolecule frequently obtained as a residue during the technological processing of biomass. Modifications in the chemical structure of lignin generate valuable products, some with particular and unique characteristics. One of the available methods for the modification of industrial lignin is oxidation by hydrogen peroxide. In this work, we conducted systematic studies of the oxidation process carried out at various pH values and oxidizing-agent concentrations. Biophysical, biochemical, and structural properties of the oxidized lignin were analyzed by UV spectrophotometry, Fourier transform infrared spectroscopy, scanning electron microscopy, and small-angle X-ray scattering. Our results reveal that lignin oxidized with 9.1% H₂O₂ (m/v) at pH 13.3 has the highest degree of fragmentation and oxidation and the highest stability. Although this processing condition might be considered quite severe, the stability of the resulting oxidized lignin was greatly increased. The identified oxidation conditions may therefore be of practical interest for industrial applications.

Relevance:

100.00%

Publisher:

Abstract:

In this work the differentiability of the principal eigenvalue $\lambda = \lambda_1(\Gamma)$ of the localized Steklov problem $-\Delta u + qu = 0$ in $\Omega$, $\partial u / \partial \nu = \lambda \chi_\Gamma(x) u$ on $\partial\Omega$, where $\Gamma \subset \partial\Omega$ is a smooth subdomain of $\partial\Omega$ and $\chi_\Gamma$ is its characteristic function relative to $\partial\Omega$, is shown. As a key point, the flux subdomain $\Gamma$ is regarded here as the variable with respect to which such differentiation is performed. An explicit formula for the derivative of $\lambda_1(\Gamma)$ with respect to $\Gamma$ is obtained. The lack of regularity up to the boundary of the first derivative of the principal eigenfunctions is a further intrinsic feature of the problem. Therefore, the whole analysis must be done in the weak sense of $H^1(\Omega)$. The study is of interest in mathematical models in morphogenesis.

Relevance:

100.00%

Publisher:

Abstract:

Background. In spite of a large number of studies in anesthetized animals, isolated hearts, and in vitro cardiomyocytes, to our knowledge myocardial function has never been studied in conscious diabetic rats. Myocardial performance and the response to stress caused by dobutamine were examined in conscious rats fifteen days after the onset of diabetes caused by streptozotocin (STZ). The protective effect of insulin was also investigated in STZ-diabetic rats. Methods. Cardiac contractility and relaxation were evaluated by means of the maximum positive (+dP/dt_max) and negative (-dP/dt_max) values of the first derivative of left ventricular pressure over time. In addition, the myocardial response to stress caused by two dosages (1 and 15 μg/kg) of dobutamine was examined. One-way analysis of variance (ANOVA) was used to compare differences among groups, and two-way repeated-measures ANOVA, followed by the Tukey post hoc test, to compare the responses to dobutamine. Differences were considered significant if P < 0.05. Results. Basal mean arterial pressure, heart rate, +dP/dt_max, and -dP/dt_max were decreased in STZ-diabetic rats, but unaltered in control rats treated with vehicle and in STZ-diabetic rats treated with insulin; insulin therefore prevented the hemodynamic and myocardial alterations observed in STZ-diabetic rats. The lower dosage of dobutamine increased heart rate, +dP/dt_max, and -dP/dt_max only in STZ-diabetic rats, while the higher dosage promoted greater, but similar, responses in the three groups. The results thus indicate that myocardial function was remarkably attenuated in conscious STZ-diabetic rats, that the lower dosage of dobutamine uncovered a greater responsiveness of the myocardium of STZ-diabetic rats, and that insulin preserved myocardial function and the integrity of the response to dobutamine. Conclusion. The present study provides new data from conscious rats showing that the cardiomyopathy of this pathophysiological condition is expressed by low indices of contractility and relaxation, and that these pathophysiological features were prevented by treatment with insulin.
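+dP/dt_max and -dP/dt_max are the extrema of the first time derivative of the left ventricular pressure trace. A minimal sketch of that computation on a synthetic waveform (sampling rate and pressure shape are illustrative assumptions):

```python
import numpy as np

fs = 1000.0                                  # Hz, hypothetical sampling rate
t = np.arange(0.0, 1.0, 1.0 / fs)            # one second of recording
# Synthetic LV pressure (mmHg): two beats approximated by half-sine pulses.
lvp = 10.0 + 90.0 * np.clip(np.sin(2 * np.pi * 2.0 * t), 0.0, None)

dpdt = np.gradient(lvp, t)                   # first derivative, mmHg/s
print(f"+dP/dt_max = {dpdt.max():.0f} mmHg/s")
print(f"-dP/dt_max = {dpdt.min():.0f} mmHg/s")
```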

Relevance:

100.00%

Publisher:

Abstract:

Therapeutic drug monitoring (TDM) is used for individual dose adjustment, to increase the efficacy of drug treatment and to reduce the occurrence of side effects. For the TDM of antipsychotics and antidepressants, however, there is the problem that more than 50 such drugs exist: a TDM laboratory must accordingly measure over 50 different active compounds plus their active metabolites. Liquid chromatography (LC or HPLC) makes the analysis of many different drugs possible. LC with column switching allows automation: blood serum or plasma, with or without prior protein precipitation, is loaded onto a pre-column; after interfering matrix components are washed out, the drugs are separated on a downstream analytical column and detected by ultraviolet spectroscopy (UV) or mass spectrometry (MS). The aim of this work was to develop LC methods that allow the measurement of as many antipsychotics and antidepressants as possible and that are suitable for routine TDM. In preliminary experiments, a column packed with C8-modified silica gel (20 µm, 10x4.0 mm I.D.) proved optimal with respect to extraction behavior, regenerability, and stability. With a first column-switching HPLC-UV method, 20 different psychotropic drugs including their metabolites, i.e., 30 different substances in total, could be quantified. The analysis time was 30 minutes. The pre-column allowed 150 injections; the analytical column withstood more than 300 plasma injections. Depending on the analyte, however, the injection volume, the flow rate, or the detection wavelength had to be changed, so the method was only of limited suitability for routine use. With a second HPLC-UV method, 43 different antipsychotics and antidepressants including metabolites could be detected. After clean-up on C8 material (10 µm, 10x4 mm I.D.), separation was carried out on Hypersil ODS (5 µm particle size) in the analytical column (250x4.6 mm I.D.) with 37.5% acetonitrile in the analytical eluent. The optimal flow rate was 1.5 ml/min and the detection wavelength 254 nm. With this method, 7 to 8 different substances could be measured in a single sample. The method was validated for the antipsychotics clozapine, olanzapine, perazine, quetiapine, and ziprasidone. The coefficient of variation (CV%) for imprecision was between 0.2 and 6.1%. The method was linear over the required measuring range (correlation coefficients R² between 0.9765 and 0.9816). Absolute and analytical recovery were between 98 and 118%. The lower limits of detection required for TDM were reached; for olanzapine it was 5 ng/ml. The method was tested on patients and proved very well suited for TDM. After retrospective evaluation of patient data, a possible therapeutic range could be formulated for the first time for quetiapine (40-170 ng/ml) and ziprasidone (40-130 ng/ml). With a mass spectrometer as detector, the measurement of eight neuroleptics and their metabolites was possible; 12 substances could be determined in one run: amisulpride, clozapine, N-desmethylclozapine, clozapine N-oxide, haloperidol, risperidone, 9-hydroxyrisperidone, olanzapine, perazine, N-desmethylperazine, quetiapine, and ziprasidone. After clean-up on C8 material (20 µm, 10x4.0 mm I.D.), separation was carried out on Synergi MAX-RP C12 (4 µm, 150x4.6 mm).
Validation of the HPLC-MS method demonstrated a linear relationship between concentration and detector signal (R² = 0.9974 to 0.9999). Imprecision was between 0.84 and 9.78%. The lower limits of detection required for TDM were reached. There was no evidence of ion suppression by matrix components. Absolute and analytical recovery were between 89 and 107%. It was shown that the HPLC-MS method can be extended without modification and can apparently cover more than 30 different psychotropic drugs. The liquid chromatographic methods developed here provide new procedures for the TDM of antipsychotics and antidepressants that make it possible to measure different psychotropic drugs and their active metabolites with a single method. This should improve the treatment of psychiatric patients, particularly with antipsychotics.

Relevance:

100.00%

Publisher:

Abstract:

A water desaturation zone develops around a tunnel in water-saturated rock when the evaporative water loss at the rock surface is larger than the water flow from the surrounding saturated region of restricted permeability. We describe the methods with which such water desaturation processes in rock materials can be quantified. The water retention characteristic θ(ψ) of crystalline rock samples was determined with a pressure membrane apparatus. The negative water potential, identical to the capillary pressure ψ, below the tensiometric range (ψ < -0.1 MPa) can be measured with thermocouple psychrometers (TP), and the volumetric water content θ by means of time domain reflectometry (TDR). These standard methods were adapted for measuring the water status in a macroscopically unfissured granodiorite with a total porosity of approximately 0.01. The measured water retention curve of granodiorite samples from the Grimsel test site (central Switzerland) exhibits a shape typical of bimodal pore-size distributions; the measured bimodality is probably an artifact of a large solid-to-void surface ratio. The thermocouples were installed without a metallic screen, using the cavity drilled into the granodiorite as a measuring chamber. The water potentials observed in a cylindrical granodiorite monolith ranged between -0.1 and -3.0 MPa; those near the wall of a ventilated tunnel between -0.1 and -2.2 MPa. Two types of three-rod TDR probes were used: one as a depth probe inserted into the rock, the other as a surface probe using three copper strips attached to the surface for detecting water content changes at the rock-to-air boundary. The TDR signal was smoothed with a low-pass filter, and the signal length was determined from the first derivative of the trace. Despite the low porosity of crystalline rock, these standard methods are applicable to describing the unsaturated zone in solid rock and may also be used in other consolidated materials such as concrete.
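A minimal sketch of the described trace processing: smooth the reflectogram with a low-pass filter, then locate the end of the signal at the extremum of its first derivative. Filter order, cutoff, and the synthetic trace are assumptions for illustration:

```python
import numpy as np
from scipy.signal import butter, filtfilt

# Synthetic TDR trace: a noisy step marking the reflection at the probe end.
n = 2000
x = np.arange(n)
trace = np.tanh((x - 1200) / 40.0) \
        + 0.05 * np.random.default_rng(0).normal(size=n)

# Low-pass Butterworth filter (zero-phase via filtfilt) to smooth the trace.
b, a = butter(4, 0.02)            # 4th order, normalized cutoff 0.02
smooth = filtfilt(b, a, trace)

# The signal length is taken at the maximum of the first derivative.
d1 = np.gradient(smooth)
end_index = int(np.argmax(d1))
print("reflection endpoint at sample", end_index)
```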

Relevance:

100.00%

Publisher:

Abstract:

Background. A recent method determines regional gas flow of the lung by electrical impedance tomography (EIT). The aim of this study is to show the applicability of this method in a porcine model of mechanical ventilation in healthy and diseased lungs. Our primary hypothesis is that global gas flow measured by EIT can be correlated with spirometry; our secondary hypothesis is that regional analysis of respiratory gas flow delivers physiologically meaningful results. Methods. In two sets of experiments, n = 7 healthy pigs and n = 6 pigs before and after induction of lavage lung injury were investigated. EIT of the lung and spirometry were registered synchronously during ongoing mechanical ventilation. In vivo aeration of the lung was analysed by EIT in four regions of interest (ROIs): 1) global, 2) ventral (non-dependent), 3) middle, and 4) dorsal (dependent). Respiratory gas flow was calculated as the first derivative of the regional aeration curve. Four phases of the respiratory cycle were discriminated, yielding peak and late inspiratory and expiratory gas flow (PIF, LIF, PEF, LEF), characterizing early and late inspiration and expiration. Results. Linear regression analysis of EIT against spirometry in healthy pigs revealed a very good correlation for peak flow and a good correlation for late flow: PIF_EIT = 0.702·PIF_spiro + 117.4, r² = 0.809; PEF_EIT = 0.690·PEF_spiro − 124.2, r² = 0.760; LIF_EIT = 0.909·LIF_spiro + 27.32, r² = 0.572; LEF_EIT = 0.858·LEF_spiro − 10.94, r² = 0.647. EIT-derived absolute gas flow was generally smaller than that from spirometry. Regional gas flow was distributed heterogeneously during the different phases of the respiratory cycle, but the regional distribution of gas flow stayed stable across different ventilator settings. Moderate lung injury changed the regional pattern of gas flow. Conclusions. We conclude that the presented method can determine global respiratory gas flow of the lung in different phases of the respiratory cycle. Additionally, it delivers meaningful insight into regional pulmonary characteristics, i.e., the regional ability of the lung to take up and release air.
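A minimal sketch of the core computation, taking the first derivative of a regional aeration signal to obtain flow and reading off the peak inspiratory and expiratory values (frame rate, waveform, and units are invented placeholders):

```python
import numpy as np

fs = 50.0                                  # Hz, assumed EIT frame rate
t = np.arange(0.0, 6.0, 1.0 / fs)

# Placeholder regional aeration curve: sinusoidal tidal ventilation at
# 15 breaths/min, in arbitrary impedance-derived volume units.
aeration = 1.0 - np.cos(2 * np.pi * 0.25 * t)

# Regional gas flow = first derivative of the aeration curve.
flow = np.gradient(aeration, t)

pif = flow.max()                           # peak inspiratory flow
pef = flow.min()                           # peak expiratory flow (negative)
print(f"PIF = {pif:.3f} a.u./s, PEF = {pef:.3f} a.u./s")
```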

Relevance:

100.00%

Publisher:

Abstract:

We study the relativistic version of the Schrödinger equation for a point particle in one dimension with a potential given by the first derivative of the delta function. Momentum cutoff regularization is used to study the bound state and the scattering states. The initial calculations show that the reciprocal of the bare coupling constant is ultraviolet divergent, and the resulting expression cannot be renormalized in the usual sense, where the divergent terms can simply be omitted. Therefore, a general procedure has been developed to derive the different physical properties of the system. The procedure is applied first to the nonrelativistic case for purposes of clarification and comparison. For the relativistic case, the results show that this system behaves exactly like the delta function potential, which means that it also shares features with quantum field theories, such as being asymptotically free. In addition, in the massless limit, it undergoes dimensional transmutation and possesses an infrared conformal fixed point. Comparison of the solution with that for the relativistic delta function potential shows evidence of universality.
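Schematically, and purely as a hedged restatement of the setup described above (the coupling symbol g and the cutoff convention are notational choices, not taken from the paper), the problem is the eigenvalue equation with relativistic kinetic energy and a δ′ potential, regularized by a momentum cutoff:

```latex
\[
  \sqrt{p^{2}+m^{2}}\,\psi(x) \;+\; g\,\delta'(x)\,\psi(x) \;=\; E\,\psi(x),
  \qquad |p| \le \Lambda .
\]
% The bare coupling g = g(\Lambda) must absorb the ultraviolet divergence
% as \Lambda \to \infty; in the massless limit m \to 0 the only scale left
% is the one generated by renormalization (dimensional transmutation).
```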

Relevance:

100.00%

Publisher:

Abstract:

A composite strontium isotopic seawater curve was constructed for the Miocene between 24 and 6 Ma by combining ⁸⁷Sr/⁸⁶Sr measurements of planktonic foraminifera from Deep Sea Drilling Project sites 289 and 588. Site 289, with its virtually continuous sedimentary record and high sedimentation rates (26 m/m.y.), was used for constructing the Oligocene to mid-Miocene part of the record, which included the calibration of 63 biostratigraphic datums to the Sr seawater curve using the timescale of Cande and Kent (1992, doi:10.1029/92JB01202). Across the Oligocene/Miocene boundary, a brief plateau occurred in the Sr seawater curve (⁸⁷Sr/⁸⁶Sr values averaged 0.70824), coincident with a carbon isotopic maximum (CM-O/M) from 24.3 to 22.6 Ma. During the early Miocene, the strontium isotopic curve was marked by a steep rise in ⁸⁷Sr/⁸⁶Sr that included a break in slope near 19 Ma: the rate of growth was about 60 ppm/m.y. between 22.5 and 19.0 Ma and increased to over 80 ppm/m.y. between 19.0 and 16 Ma. Beginning at ~16 Ma (between carbon isotopic maxima CM3 and CM4 of Woodruff and Savin (1991, doi:10.1029/91PA02561)), the rate of ⁸⁷Sr/⁸⁶Sr growth slowed, and ⁸⁷Sr/⁸⁶Sr values were near constant from 15 to 13 Ma. After 13 Ma, growth in ⁸⁷Sr/⁸⁶Sr resumed and continued until ~9 Ma, when the rate of ⁸⁷Sr/⁸⁶Sr growth decreased to zero once again. The entire Miocene seawater curve can be described by a high-order function, and the first derivative of this function, d(⁸⁷Sr/⁸⁶Sr)/dt, reveals two periods of increased slope. The greatest rate of ⁸⁷Sr/⁸⁶Sr change occurred during the early Miocene between ~20 and 16 Ma, and a smaller, but distinct, period of increased slope also occurred during the late Miocene between ~12 and 9 Ma. These periods of steepened slope coincide with major phases of uplift and denudation of the Himalayan-Tibetan Plateau region, supporting previous interpretations that the primary control on seawater ⁸⁷Sr/⁸⁶Sr during the Miocene was related to the collision of India and Asia. The rapid increase in ⁸⁷Sr/⁸⁶Sr values during the early Miocene from 20 to 16 Ma implies high rates of chemical weathering and dissolved riverine fluxes to the oceans. In the absence of another source of CO₂, these high rates of chemical weathering should have quickly resulted in a drawdown of atmospheric CO₂ and climatic cooling through a reversed greenhouse effect. The paleoclimatic record, however, indicates a warming trend during the early Miocene, culminating in a climatic optimum between 17 and 14.5 Ma. We suggest that the high rates of chemical erosion and warm temperatures during the climatic optimum were caused by an increased contribution of volcanic CO₂ from the eruption of the Columbia River Flood Basalts (CRFB) between 17 and 15 Ma. The decrease in the rate of CRFB eruptions at 15 Ma and the removal of atmospheric carbon dioxide by increased organic carbon burial in Monterey deposits eventually led to cooling and increased glaciation between ~14.5 and 13 Ma. The CRFB hypothesis helps to explain the significant time lag between the onset of increased rates of organic carbon burial in the Monterey at 17.5 Ma (as marked by increased δ¹³C values) and the climatic cooling and glaciation during the middle Miocene (as marked by the increase in δ¹⁸O values), which did not begin until ~14.5 Ma.
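A minimal sketch of the fit-and-differentiate step: a high-order polynomial through age vs. ratio data, with its analytic first derivative used to locate intervals of steepened slope. The data points below are fabricated placeholders, not the Site 289/588 measurements:

```python
import numpy as np

# Placeholder (age in Ma, 87Sr/86Sr) pairs spanning the Miocene record.
age = np.array([24.0, 22.0, 20.0, 18.0, 16.0, 14.0, 12.0, 10.0, 8.0, 6.0])
sr  = np.array([0.70824, 0.70830, 0.70845, 0.70862, 0.70875,
                0.70878, 0.70882, 0.70895, 0.70900, 0.70902])

# High-order polynomial fit and its analytic first derivative.
coeffs = np.polyfit(age, sr, 6)
deriv = np.polyder(coeffs)

fine = np.linspace(age.min(), age.max(), 500)
slope = np.polyval(deriv, fine)      # d(87Sr/86Sr)/dt per m.y. of age

# The ratio grows toward the present, so growth appears as negative slope
# against age; the steepest-growth interval has the largest |slope|.
print(f"steepest change near {fine[np.argmax(np.abs(slope))]:.1f} Ma")
```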

Relevance:

100.00%

Publisher:

Abstract:

This paper presents the treatment of potential problems in two-dimensional systems using a boundary integral equation method (B.I.E.M.) with a completely isoparametric parabolic formulation: the boundary is discretized with elements that are parabolic both in geometry and in the field variables, i.e., the field variable, its first derivative, and the boundary geometry are interpolated using second-order piecewise polynomials. The advantages over linear isoparametric elements within potential theory are studied, and several results are presented and compared with simpler formulations. A study is also presented of the singular zones produced by degenerate parabolic elements, i.e., the possibility of modelling singular behavior by moving the midside node of selected elements.
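In such a formulation, a three-node parabolic boundary element interpolates geometry and field with the same quadratic shape functions in the local coordinate ξ ∈ [−1, 1]. A minimal sketch with arbitrary nodal values (not taken from the paper):

```python
import numpy as np

def shape_quad(xi):
    """Quadratic shape functions N1, N2, N3 at local coordinate xi in [-1, 1]."""
    return np.array([0.5 * xi * (xi - 1.0),    # node at xi = -1
                     (1.0 - xi) * (1.0 + xi),  # node at xi =  0
                     0.5 * xi * (xi + 1.0)])   # node at xi = +1

# Isoparametric mapping: the same functions interpolate geometry and field.
nodes_xy = np.array([[0.0, 0.0], [0.5, 0.1], [1.0, 0.0]])  # parabolic element
u_nodes  = np.array([1.0, 1.3, 1.8])                        # nodal potentials

xi = 0.4
N = shape_quad(xi)
point = N @ nodes_xy          # interpolated boundary point
u     = N @ u_nodes           # interpolated potential at that point
print("x,y =", point, " u =", u)
```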

Relevance:

100.00%

Publisher:

Abstract:

The project presented here surveys the technologies used for object detection and recognition, especially for leaves and chromosomes. The document has the typical parts of a scientific paper: an abstract, an introduction, sections covering the investigated area, future work, conclusions, and the references used in its elaboration. The abstract summarizes what is to be found in this paper, namely the technologies employed for pattern detection and recognition of leaves and chromosomes, and the existing work on cataloguing these objects. The introduction explains the meanings of detection and recognition. This is necessary because many papers confuse these terms, especially those dealing with chromosomes. Detecting an object means gathering the parts of the image that are useful and eliminating the useless parts; in short, detection amounts to recognizing the object's borders. Recognition refers to the process by which the computer or machine says what kind of object it is handling. Afterwards we present a compilation of the most widely used technologies for object detection in general. There are two main groups in this category: those based on image derivatives and those based on ASIFT points. The methods based on image derivatives have in common that the image is convolved with a previously defined matrix (kernel). This is done to detect borders in the image, which are changes in pixel intensity. Within these technologies there are two groups: gradient-based methods, which search for maxima and minima of pixel intensity, since they use only the first derivative, and Laplacian-based methods, which search for zero crossings of the second derivative of the intensity. The choice depends on the level of detail wanted in the final result: gradient-based methods involve fewer operations, so they consume less time and fewer resources, but the quality is lower; Laplacian-based methods need more time and resources, since they require more operations, but give a much better quality result (a minimal sketch of both operator families follows this paragraph). After explaining the derivative-based methods, we review the different algorithms available in each group. The other big group of technologies for object recognition is the one based on ASIFT points, which characterizes an image by six parameters and compares it with another image on the basis of those parameters. The disadvantage of these methods, for our purposes, is that each model is only valid for one single object: if we want to recognize two different leaves, even of the same species, this method will not recognize both. It is still important to mention these technologies, since we are discussing recognition methods in general. At the end of the chapter there is a comparison of the pros and cons of all the technologies employed, first separately and then all together in light of our purposes. The next chapter, recognition techniques, is not very extensive because, although there are general steps for object recognition, every object to be recognized requires its own method; hence no general method can be specified in that chapter.
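A minimal sketch of the two operator families just described, applied to a synthetic test image: Sobel kernels for the gradient (first-derivative) route and the standard 3×3 kernel for the Laplacian (second-derivative) route. Kernels and image are generic illustrations, not taken from the project:

```python
import numpy as np
from scipy.ndimage import convolve

# Sobel kernels (gradient-based, first derivative) and the Laplacian
# kernel (second derivative); the test image is a synthetic placeholder.
sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
sobel_y = sobel_x.T
laplacian = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype=float)

img = np.zeros((64, 64))
img[16:48, 16:48] = 1.0            # a bright square on a dark background

# Gradient-based: edge strength is the gradient magnitude (look for maxima).
gx = convolve(img, sobel_x)
gy = convolve(img, sobel_y)
grad_mag = np.hypot(gx, gy)

# Laplacian-based: edges sit at the zero crossings of the response.
lap = convolve(img, laplacian)

print("max gradient magnitude:", grad_mag.max())
print("Laplacian range:", lap.min(), lap.max())
```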
We then move on to leaf detection techniques on computers, using the derivative-based technique explained above. The next step is to turn the leaf into a set of parameters; depending on the source consulted, there are more or fewer of them. Some papers recommend dividing the leaf into 3 main features (shape, dentation, and venation) and, by mathematical operations on them, deriving up to 16 secondary features. Another proposal divides the leaf into 5 main features (diameter, physiological length, physiological width, area, and perimeter) and extracts 12 secondary features from those. This second alternative is the most widely used, so it is taken as the reference (a sketch of a few such shape features closes this abstract). For leaf recognition, we rely on a paper that provides source code which, after the user clicks on both ends of the leaf, automatically reports the species of the leaf to be recognized; all that is required is a database. In the tests reported in that document, an accuracy of 90.312% is claimed over 320 tests in total (32 plants in the database and 10 tests per species). The next chapter deals with chromosome detection, where the metaphase plate, in which the chromosomes are disorganized, must be converted into the karyotype plate, the usual view of the 23 chromosomes ordered by number. There are two types of techniques for this step: the skeletonization process and angle sweeping. Skeletonization consists in suppressing the interior pixels of the chromosome so that only its silhouette remains; this method is quite similar to those based on image derivatives, with the difference that it detects not the borders but the interior of the chromosome. The second technique sweeps angles from the beginning of the chromosome and, taking into account that a single chromosome cannot bend by more than X degrees, detects the various regions of the chromosome. Once the karyotype plate is defined, we continue with chromosome recognition. For this there is a technique based on the banding pattern (grey-scale bands) that makes each chromosome unique: the program detects the longitudinal axis of the chromosome and reconstructs the band profiles, after which the computer is able to recognize the chromosome. Concerning future work, we currently have two independent stages that do not unite detection and recognition, so our main focus would be a program that combines both techniques. On the leaf side we have seen that detection and recognition are linked, as both share the option of dividing the leaf into 5 main features; the work to be done is an algorithm that connects the two methods, since the leaf-recognition program requires clicking on both leaf ends and is therefore not automatic. On the chromosome side, we should create an algorithm that searches for the beginning of the chromosome and then starts to sweep angles, later passing the parameters to the program that searches for the band profiles. Finally, the summary explains why this type of investigation is needed: with global warming, many species (animals and plants) are beginning to go extinct, which is why a large database gathering all possible species is needed. For recognizing an animal species, we only have to have its 23 chromosomes.
For recognizing a plant there are several ways, but the easiest way to input it into a computer is to scan a leaf of the plant.
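As referenced above, a minimal sketch of deriving secondary shape features from the five main leaf features; the ratios below are generic morphometric descriptors, not necessarily the twelve used in the cited work:

```python
import numpy as np

def leaf_secondary_features(diameter, length, width, area, perimeter):
    """A few illustrative shape ratios derived from the 5 main leaf features.

    These are generic morphometric descriptors (aspect ratio, form factor,
    rectangularity, ...), not necessarily the exact 12 used in the cited work.
    """
    return {
        "aspect_ratio": length / width,
        "form_factor": 4.0 * np.pi * area / perimeter**2,  # 1.0 for a circle
        "rectangularity": area / (length * width),
        "perimeter_to_diameter": perimeter / diameter,
    }

# Hypothetical measurements for one scanned leaf (arbitrary units).
print(leaf_secondary_features(diameter=9.0, length=8.5, width=4.2,
                              area=26.0, perimeter=21.5))
```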