921 results for radial basis function network
Abstract:
A Digital Elevation Model (DEM) provides the information basis used for many geographic applications, such as topographic and geomorphologic studies and landscape analysis through GIS (Geographic Information Systems), among others. The capacity of a DEM to represent the Earth's surface depends on the surface roughness and the resolution used. Each DEM pixel depends on the scale used, characterized by two variables: resolution and extent of the area studied. DEMs can vary in resolution and accuracy according to the production method, although there are statistical characteristics that remain constant or very similar over a wide range of scales. Based on this property, several techniques directly related to fractal geometry have been applied to characterize DEMs through multiscale analysis: the multifractal spectrum and the structure function. The comparison of the results obtained by both methods is discussed. The study area is represented by a 1024 x 1024 data matrix obtained from a DEM with a resolution of 10 x 10 m per point, corresponding to the region known as "Monte de El Pardo", a property of Spanish National Heritage (Patrimonio Nacional Español) of 15,820 ha located a short distance from the centre of Madrid. The Manzanares River crosses this area from north to south. In the southern part there is a reservoir with a capacity of 43 hm3, whose water level ranges from 603.3 m up to 632 m at maximum capacity. The minimum altitude of the area is reached in the middle of the reservoir.
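As a rough illustration of the second technique named above, the sketch below estimates structure-function moments S_q(r) = <|z(x + r) − z(x)|^q> on an elevation grid and a scaling exponent from a log-log fit. The synthetic surface and the lag range are assumptions for illustration, not the study's data.

```python
# Hypothetical sketch: structure-function scaling exponents on a DEM grid.
# The 1024 x 1024 stand-in surface and the lag range are illustrative assumptions.
import numpy as np

def structure_function(dem, lags, q=2.0):
    """Moment of order q of elevation increments |z(x + r) - z(x)|^q
    averaged over the grid, for horizontal lags r (in pixels)."""
    values = []
    for r in lags:
        dx = np.abs(dem[:, r:] - dem[:, :-r])   # increments along rows
        dy = np.abs(dem[r:, :] - dem[:-r, :])   # increments along columns
        values.append(np.mean(np.concatenate([dx.ravel(), dy.ravel()]) ** q))
    return np.array(values)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    dem = np.cumsum(np.cumsum(rng.normal(size=(1024, 1024)), axis=0), axis=1)  # stand-in surface
    lags = np.array([1, 2, 4, 8, 16, 32, 64])
    s_q = structure_function(dem, lags, q=2.0)
    # Scaling exponent zeta(q) from the slope of log S_q(r) vs. log r
    zeta_q, _ = np.polyfit(np.log(lags), np.log(s_q), 1)
    print(f"zeta(2) ~ {zeta_q:.3f}")
```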
Abstract:
Protein interaction networks have become a tool to study biological processes, either for predicting molecular functions or for designing new drugs to regulate the main biological interactions. Furthermore, such networks are known to be organized in sub-networks of proteins contributing to the same cellular function. However, protein function prediction is not accurate, and the network formalism has traditionally assigned each protein to only one function. By considering the network of physical interactions between yeast proteins together with a manual, single functional classification scheme, we introduce a method able to reveal important information on protein function at both micro- and macro-scale. In particular, inspecting the properties of oscillatory dynamics on top of the protein interaction network leads to the identification of misclassification problems in protein function assignments, as well as to the correct identification of protein functions. We also demonstrate that our approach can give a network representation of the meta-organization of biological processes by unraveling the interactions between different functional classes.
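The abstract does not spell out the dynamical model, so the sketch below uses a generic Kuramoto-type phase oscillator per protein on a toy interaction graph; the adjacency matrix, natural frequencies and coupling are illustrative assumptions only, not the authors' exact dynamics.

```python
# Hedged sketch: Kuramoto-type phase oscillators on a protein-interaction graph.
# The specific dynamics and the toy adjacency matrix are illustrative assumptions.
import numpy as np

def simulate_kuramoto(adj, omega, coupling=1.0, dt=0.01, steps=2000, seed=0):
    """Integrate d(theta_i)/dt = omega_i + K * sum_j A_ij sin(theta_j - theta_i)."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0.0, 2.0 * np.pi, size=len(omega))
    for _ in range(steps):
        diff = theta[None, :] - theta[:, None]          # theta_j - theta_i
        theta = theta + dt * (omega + coupling * np.sum(adj * np.sin(diff), axis=1))
    return theta % (2.0 * np.pi)

if __name__ == "__main__":
    # Tiny toy network: two "functional modules" weakly linked to each other.
    adj = np.array([[0, 1, 1, 0, 0, 0],
                    [1, 0, 1, 0, 0, 0],
                    [1, 1, 0, 1, 0, 0],
                    [0, 0, 1, 0, 1, 1],
                    [0, 0, 0, 1, 0, 1],
                    [0, 0, 0, 1, 1, 0]], dtype=float)
    omega = np.ones(6)
    theta = simulate_kuramoto(adj, omega)
    # Proteins whose phases lock with a module other than their annotated one
    # would be candidates for misclassified function assignments.
    print(np.round(theta, 2))
```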
Abstract:
In this work we present the assessment of the structural and piezoelectric properties of Al(0.5-x)TixN0.5 compounds (titanium content below 6 at.%), which are expected to possess improved properties compared with conventional AlN films, such as larger piezoelectric activity, thermal stability of frequency and temperature resistance. Al:Ti:N films were deposited from a twin concentric target of Al and Ti by reactive AC sputtering, which provided films with a radial gradient of the Ti concentration. The properties of the films were investigated as a function of their composition, which was measured by energy-dispersive X-ray spectroscopy and Rutherford backscattering spectrometry. The microstructure and morphology of the films were assessed by X-ray diffraction and infrared reflectance. Their electroacoustic properties and dielectric constant were derived from the frequency response of BAW test resonators. The properties of the Al:Ti:N films appear to be strongly dependent on the Ti content, which modifies the AlN wurtzite crystal structure, leading to a greater dielectric constant, lower sound velocities, a lower electromechanical coupling factor and a moderately improved temperature coefficient of the resonant frequency.
Abstract:
A connectivity function defined by the 3D Euler number is a topological indicator and can be related to hydraulic properties (Vogel and Roth, 2001). This study aims to develop Euler connectivity indexes as indicators of the ability of soils to allow fluid percolation. The starting point was a 3D grey-scale image acquired by X-ray computed tomography of a soil at a bulk density of 1.2 Mg m-3. This image was used in the simulation of 40000 particles following a directed random-walk algorithm with 7 binarization thresholds. These data consisted of 7 files containing the simulated end points of the 40000 random walks, obtained in Ruiz-Ramos et al. (2010). MATLAB software was used to compute the frequency matrix of the number of particles arriving at every end point of the random walks and their 3D representation.
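A minimal Python stand-in for the end-point bookkeeping described above (the original work used MATLAB): directed random walkers move through a binarized pore space and a frequency matrix counts where each walker stops. The grid size, walker count, step limit and movement rule are illustrative assumptions, not the study's settings.

```python
# Illustrative sketch: accumulate how many simulated particles end at each voxel
# of a binarized 3D soil image. All parameters below are made up.
import numpy as np

def end_point_frequency(pore_mask, n_walkers=40000, n_steps=500, seed=0):
    """Directed random walk on the pore space: steps go downward in z and
    laterally at random; walkers stop when they would leave the pore phase."""
    rng = np.random.default_rng(seed)
    nz, ny, nx = pore_mask.shape
    freq = np.zeros_like(pore_mask, dtype=int)
    for _ in range(n_walkers):
        z, y, x = 0, rng.integers(ny), rng.integers(nx)
        for _ in range(n_steps):
            z2, y2, x2 = z + 1, y + rng.integers(-1, 2), x + rng.integers(-1, 2)
            if not (0 <= z2 < nz and 0 <= y2 < ny and 0 <= x2 < nx):
                break                      # left the image
            if not pore_mask[z2, y2, x2]:
                break                      # hit the solid phase
            z, y, x = z2, y2, x2
        freq[z, y, x] += 1                 # record the end point
    return freq

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    mask = rng.random((64, 64, 64)) > 0.4  # toy binarized pore space
    freq = end_point_frequency(mask, n_walkers=1000, n_steps=64)
    print(freq.sum(), freq.max())
```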
Abstract:
The study of temperature gradients in cold stores and containers is a critical issue in the food industry for the quality assurance of products during transport, as well as for minimizing losses. The objective of this work is to develop a new methodology of data analysis based on phase-space graphs of temperature and enthalpy, collected by means of multidistributed, low-cost and autonomous wireless sensors and loggers. A transoceanic refrigerated transport of lemons in a reefer container ship from Montevideo (Uruguay) to Cartagena (Spain) was monitored with a network of 39 semi-passive TurboTag RFID loggers and 13 i-button loggers. Transport included intermodal transit from transoceanic to short-sea shipping vessels and a truck trip. Data analysis is carried out using qualitative phase diagrams computed on the basis of the Takens-Ruelle reconstruction of attractors. Fruit stress is quantified in terms of the phase-diagram area, which characterizes the cyclic behaviour of temperature. Areas within the enthalpy phase diagram computed for the short-sea shipping transport were 5 times higher than those computed for the long sea shipping, with coefficients of variation above 100% for both periods. This new methodology for data analysis highlights the significant heterogeneity of thermohygrometric conditions at different locations in the container.
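A minimal sketch of the two ingredients named above, under stated assumptions: a delay (Takens-style) embedding of a temperature record into a 2-D phase diagram, and the shoelace formula for the enclosed area used as a rough cyclic-stress proxy. The synthetic signal and the delay value are invented for illustration.

```python
# Hedged sketch: delay embedding of a temperature record and the area of the
# resulting 2-D phase diagram (shoelace formula) as a rough stress proxy.
import numpy as np

def delay_embed(x, delay):
    """Return the 2-D delay embedding (x(t), x(t + delay))."""
    return np.column_stack([x[:-delay], x[delay:]])

def polygon_area(points):
    """Shoelace formula for the area enclosed by the (closed) trajectory."""
    x, y = points[:, 0], points[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))

if __name__ == "__main__":
    t = np.linspace(0, 10 * 2 * np.pi, 2000)
    temperature = 6.0 + 1.5 * np.sin(t) + 0.1 * np.random.default_rng(0).normal(size=t.size)
    orbit = delay_embed(temperature, delay=50)
    print(f"phase-diagram area ~ {polygon_area(orbit):.2f} (arbitrary units)")
```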
Abstract:
Soy protein isolate is a typical vegetable protein with health-enhancing activities. Inulin, a prebiotic non-digestible carbohydrate, has functional properties. A 200 g serving of mashed potato with added soy protein isolate and inulin at concentrations of 15-60 g/kg provides from 3 to 12 g of soy protein isolate and/or inulin, respectively. Currently, no information is available about the possible texture-modifying effect of this non-ionizable polar carbohydrate in different soy-based food systems. In this study, the effect of adding soy protein isolate and inulin blends at different soy protein isolate:inulin ratios on the degree of inulin polymerization and on the rheological and structural properties of fresh and frozen/thawed mashed potatoes was evaluated. The inulin chemical structure remained intact throughout the various treatments, and soy protein isolate did not affect inulin composition, being a protein compatible with this fructan. Small-strain rheology showed that both ingredients behaved like soft fillers. In the frozen/thawed mashed potato samples, addition of the 30:30 and 15:60 blend ratios significantly increased elasticity (G′ value) compared with the 0:0 control, consequently reducing the freeze/thaw stability conferred by the cryoprotectants. Inulin crystallites caused a significant strengthening effect on the soy protein isolate gel. Micrographs revealed that soy protein isolate supports the inulin structure by building up a second fine-stranded network. The possibility of using soy protein isolate and inulin in combination in mashed potatoes to provide a highly nutritious and healthy product is therefore promising.
Abstract:
The analysis of large amounts of data is a field with many years of research behind it, centred on extracting significant values that make data easier to understand and interpret. The analysis of interdependence between time series is an important field of research, mainly as a result of advances in the characterization of dynamical systems from the signals they produce. In medicine, many studies try to understand the behaviour of the brain, its mode of operation and its internal connections. The human brain comprises approximately 10^11 neurons, each of which makes about 10^3 synaptic connections. This huge number of connections between individual processing elements provides the fundamental substrate for neuronal ensembles to become transiently synchronized or functionally connected. A similar complex network configuration and dynamics can also be found at the macroscopic scales of systems neuroscience and brain imaging. The emergence of dynamically coupled cell assemblies represents the neurophysiological substrate for cognitive functions such as perception, learning and thinking.
Understanding the complex network organization of the brain on the basis of neuroimaging data represents one of the greatest challenges for systems neuroscience. Brain connectivity is an elusive concept that refers to different interrelated aspects of brain organization: structural connectivity, functional connectivity (FC) and effective connectivity (EC). Structural connectivity refers to a network of physical connections linking sets of neurons; it is the anatomical structure of brain networks. FC, in contrast, refers to the statistical dependence between the signals stemming from two distinct units within a nervous system, while EC refers to the causal interactions between them. This research opens the door to addressing brain-related diseases such as Parkinson's disease, senile dementia and mild cognitive impairment. One of the most important projects associated with research into Alzheimer's and other diseases is the European project called Blue Brain. The Center for Biomedical Technology (CTB) of the Universidad Politecnica de Madrid (UPM) forms part of this project. CTB researchers have developed a magnetoencephalography (MEG) data processing tool that allows data to be visualised and analysed in an intuitive way. This tool is called HERMES, and it is presented in this document.
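As a toy illustration of one of the simplest FC measures, the sketch below computes a Pearson correlation matrix between synthetic sensor signals. This is a generic example, not HERMES code; the signals and coupling pattern are invented.

```python
# Illustrative sketch of a simple functional-connectivity (FC) measure:
# a Pearson correlation matrix between sensor time series. Synthetic signals
# stand in for MEG data.
import numpy as np

def fc_correlation(signals):
    """signals: array of shape (n_channels, n_samples). Returns the
    n_channels x n_channels matrix of pairwise Pearson correlations."""
    return np.corrcoef(signals)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    t = np.arange(0, 10, 0.01)
    common = np.sin(2 * np.pi * 1.0 * t)          # shared rhythm
    signals = np.vstack([
        common + 0.3 * rng.normal(size=t.size),   # channels 0-1 coupled
        common + 0.3 * rng.normal(size=t.size),
        rng.normal(size=t.size),                  # channel 2 independent
    ])
    print(np.round(fc_correlation(signals), 2))
```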
Abstract:
The nitrogen content dependence of the electronic properties of copper nitride thin films, with an atomic percentage of nitrogen ranging from 26 ± 2 to 33 ± 2, has been studied by means of optical (spectroscopic ellipsometry), thermoelectric (Seebeck) and electrical resistivity measurements. The optical spectra are consistent with direct optical transitions corresponding to the stoichiometric semiconductor Cu3N plus a free-carrier contribution, essentially independent of temperature, which can be tuned in accordance with the N excess. Deviation of the N content from stoichiometry leads to significant decreases, from −5 to −50 μV/K, in the Seebeck coefficient and to large enhancements, from 10^−3 up to 10 Ω cm, in the electrical resistivity. Band structure and density-of-states calculations have been carried out on the basis of density functional theory to account for the experimental results.
Abstract:
Bayesian network classifiers are a powerful machine learning tool. In order to evaluate the expressive power of these models, we compute families of polynomials that sign-represent decision functions induced by Bayesian network classifiers. We prove that those families are linear combinations of products of Lagrange basis polynomials. In the absence of V-structures in the predictor sub-graph, we are also able to prove that this family of polynomials does indeed characterize the specific classifier considered. We then use this representation to bound the number of decision functions representable by Bayesian network classifiers with a given structure, and we compare these bounds to the ones obtained using the Vapnik-Chervonenkis dimension.
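To make the sign-representation idea concrete, here is a hedged sketch for the simplest case: a naive Bayes classifier over binary features, whose log-odds discriminant is a polynomial written in the Lagrange basis polynomials L0(x) = 1 − x and L1(x) = x on each feature, and whose sign gives the predicted class. The parameters are invented and the construction only illustrates a special case of the general result.

```python
# Hedged sketch: the decision function of a naive Bayes classifier with binary
# features expressed as a polynomial in Lagrange basis polynomials; its sign
# gives the class. Parameters are made up for illustration.
import numpy as np

def lagrange_basis_binary(x):
    """Lagrange basis polynomials on the nodes {0, 1}: L0(x) = 1 - x, L1(x) = x."""
    return np.array([1.0 - x, x])

def discriminant(x, prior, cpt):
    """log P(C=1, x) - log P(C=0, x) for naive Bayes with binary features;
    cpt[i, c, k] = P(X_i = k | C = c)."""
    g = np.log(prior[1]) - np.log(prior[0])
    for i, xi in enumerate(x):
        basis = lagrange_basis_binary(xi)                     # [L0(xi), L1(xi)]
        coeffs = np.log(cpt[i, 1, :]) - np.log(cpt[i, 0, :])  # per-value log-ratios
        g += np.dot(coeffs, basis)                            # polynomial in xi
    return g                                                   # sign gives the class

if __name__ == "__main__":
    prior = np.array([0.5, 0.5])
    cpt = np.array([[[0.8, 0.2], [0.3, 0.7]],     # feature 0
                    [[0.6, 0.4], [0.4, 0.6]]])    # feature 1
    for x in [(0, 0), (1, 1)]:
        print(x, "-> class", int(discriminant(np.array(x), prior, cpt) > 0))
```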
Abstract:
Many researchers have used theoretical or empirical measures to assess social benefits in transport policy implementation. However, few have measured social benefits by using discount rates, including the intertemporal preference rate of users, the private investment discount rate, and the intertemporal preference rate of the government. In general, the same social discount rate is used for all social actors. This paper assesses a new method that integrates the different discount rates of the different social actors to measure the real benefits of each actor in the short, medium and long term. A dynamic simulation is provided by a strategic land-use and transport interaction model. The method was tested by optimizing a cordon toll scheme in Madrid, Spain, considering socioeconomic efficiency and environmental criteria. On the basis of the modified social welfare function, the effects on the measure of social benefits were estimated and compared with the classical welfare function measures. The results show that the use of more suitable discount rates for each social actor affects the selection and definition of the optimal congestion-pricing strategy. The usefulness of the congestion toll measure declines more quickly over time. This result could be the key to understanding the relationship between transport system policies and the distribution of social actors' benefits in a metropolitan context.
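A minimal numeric sketch of the core idea, under invented benefit streams and rates: discounting each actor's yearly benefits with its own rate and comparing the total with a classical single-rate welfare measure. The figures and actor names are assumptions, not the paper's model.

```python
# Illustrative sketch (not the paper's exact welfare function): single social
# discount rate vs. actor-specific discount rates over a 30-year benefit stream.
def npv(benefits, rate):
    """Net present value of a yearly benefit stream at a constant discount rate."""
    return sum(b / (1.0 + rate) ** t for t, b in enumerate(benefits, start=1))

if __name__ == "__main__":
    horizon = 30
    streams = {                       # made-up yearly benefits per actor (EUR millions)
        "users":      [12.0] * horizon,
        "operator":   [5.0] * horizon,
        "government": [8.0] * horizon,
    }
    single_rate = 0.05                # classical approach: one social discount rate
    actor_rates = {"users": 0.03, "operator": 0.08, "government": 0.045}

    classical = sum(npv(b, single_rate) for b in streams.values())
    modified = sum(npv(b, actor_rates[a]) for a, b in streams.items())
    print(f"classical welfare measure: {classical:.1f}")
    print(f"actor-specific measure:    {modified:.1f}")
```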
Abstract:
The paper focuses on the analysis of radial-gated spillways, carried out by solving a numerical model based on the finite element method (FEM). The Oliana Dam is considered as a case study, and the discharge capacity is predicted both by the application of a level-set-based free-surface solver and by the use of traditional empirical formulations. The results of the analysis are then used to train an artificial neural network that allows real-time predictions of the discharge for any combination of energy head and gate opening within the operating range of the reservoir. The comparison of the results obtained with the different methods shows that numerical models such as the FEM can be useful as a predictive tool for the analysis of the hydraulic performance of radial-gated spillways.
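A hedged sketch of the surrogate step only: a small neural network regressor mapping (energy head, gate opening) to discharge. Since the FEM results are not available here, the training data are generated from a simplified free-flow gate formula Q = Cd·b·a·√(2gH) with invented dimensions; the network architecture is likewise an assumption, not the paper's.

```python
# Hedged sketch: neural-network surrogate for gate discharge trained on data
# from a simplified gate formula (made-up dimensions), not on FEM results.
import numpy as np
from sklearn.neural_network import MLPRegressor

g, Cd, b = 9.81, 0.7, 10.0                     # gravity, discharge coeff., gate width (m)

rng = np.random.default_rng(0)
H = rng.uniform(2.0, 20.0, 500)                # energy head over the gate (m)
a = rng.uniform(0.5, 5.0, 500)                 # gate opening (m)
Q = Cd * b * a * np.sqrt(2.0 * g * H)          # synthetic "observed" discharge (m^3/s)

X = np.column_stack([H, a])                    # inputs left unscaled for brevity
model = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=5000, random_state=0)
model.fit(X, Q)

print("predicted Q for H=10 m, a=2 m:", float(model.predict([[10.0, 2.0]])[0]))
print("formula value:               ", Cd * b * 2.0 * np.sqrt(2.0 * g * 10.0))
```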
Abstract:
In the present work, four main problems have been addressed within the framework of non-linear elasticity based on representative constitutive models, namely problems related to loss-of-stability phenomena associated with boundary value problems for fibre-reinforced materials. Each of the considered problems is formulated and analysed separately in different chapters. We first start with the analysis of discontinuous deformation gradients for a transversely isotropic material under plane deformation. In particular, the material model is an isotropic neo-Hookean base augmented with a simple unidirectional reinforcement characterised by a single parameter. The solution of this problem is related to material instabilities and is associated with a shear-band-type failure mode. The loss of ellipticity of the governing differential equations is a necessary condition for the existence of these material instabilities.
The second problem involves a detailed analysis of the combined non-linear extension, inflation and torsion of a thick-walled circular cylindrical tube, where it has been found that the aforementioned deformation is controllable only for certain preferred directions of transverse isotropy. Numerical results are illustrated to explain the elastic behaviour of the tube for the admissible preferred directions under the considered deformation. The third problem deals with the analysis of a doubly fibre-reinforced thick-walled circular cylindrical tube undergoing pure azimuthal shear for a special class of reinforcing models in which multiple non-smooth solutions emerge. The associated instability phenomena are found to occur prior to the point where the nominal stress tensor changes monotonicity in a particular direction. It has also been shown that the loss-of-ellipticity condition that arises from the equilibrium equation and the condition W'' = 0 (the vanishing of the second derivative of the strain-energy function with respect to the deformation) are equivalent necessary conditions for the emergence of multiple solutions for the considered material. Finally, a detailed analysis, on the basis of the loss of ellipticity of the governing differential equations, is carried out for combined helical, axial and radial elastic deformations of a fibre-reinforced circular cylindrical tube for different geometries of the reinforcing fibres.
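For reference, a standard way to write the (strong) ellipticity condition invoked repeatedly above, in generic notation that is assumed here and not taken from the thesis: the equilibrium equations lose ellipticity when the acoustic tensor built from the elasticity tensor ceases to be positive definite.

```latex
% Generic statement of strong ellipticity (assumed standard notation):
% ellipticity is lost when Q(\mathbf{n}) ceases to be positive definite.
\[
  Q_{ij}(\mathbf{n}) = \mathcal{A}_{piqj}\, n_p\, n_q ,
  \qquad
  \mathcal{A}_{piqj} = \frac{\partial^2 W}{\partial F_{ip}\,\partial F_{jq}} ,
\]
\[
  \text{strong ellipticity:}\quad
  Q_{ij}(\mathbf{n})\, m_i\, m_j > 0
  \quad \text{for all unit vectors } \mathbf{m},\ \mathbf{n}.
\]
```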
Abstract:
In general, the distribution of a fleet of vehicles that travel fixed routes is not implemented entirely on the basis of objective criteria, and other features that are more difficult to quantify take priority. An appropriate analysis should consider the existing variability amongst the different routes within a city in order to determine which technology best suits the characteristics of each itinerary. This work presents a methodology to optimize the allocation of a fleet of vehicles to its routes in order to reduce fuel consumption and pollutant emissions. The proposed method is structured according to the following procedure:
- Recording of the kinematic characteristics of the vehicles that travel a representative set of routes.
- Grouping of the lines into clusters of similar routes using a hierarchical algorithm that optimizes a similarity index between routes, obtained by hypothesis testing on a set of representative variables.
- Construction of a specific kinematic cycle to represent each cluster of routes.
- Selection of macroscopic variables that allow the classification of the remaining lines using a neural network trained with the information gathered from the measured routes.
- Characterization of the available fleet.
- Availability of a model that estimates, according to the technology of the vehicle, the fuel consumption and the emissions associated with the kinematic variables of the cycles.
- Development of a vehicle reassignment algorithm that optimizes an objective function dependent on the emissions.
Two scenarios of great relevance for environmental evaluation are considered in the fleet optimization process: minimizing carbon dioxide emissions, because of its impact as a greenhouse gas (GHG), and, alternatively, minimizing the production of nitrogen oxides, because of their influence on acid rain and on the formation of tropospheric ozone in urban areas. Furthermore, in both cases additional constraints are introduced so that the emissions of the remaining substances do not exceed the values of the fleet organization currently implemented by the operator. The methodology has been applied to 160 bus lines of the EMT of Madrid, with kinematic data available for 25 routes. The results show that, in both cases, it is feasible to obtain a redistribution of the fleet that significantly reduces emissions of most pollutant substances while preventing, in return, an increase in the emission of any other contaminant.
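A hedged sketch of the final reassignment step only, under invented emission figures: casting the vehicle-to-route allocation as an assignment problem that minimizes total CO2. The constraints on the other pollutants used in the thesis are omitted in this minimal example.

```python
# Hedged sketch of the reassignment step: assign each vehicle to a route cluster
# so that total CO2 is minimized. Emission factors and counts are invented.
import numpy as np
from scipy.optimize import linear_sum_assignment

# co2[i, j]: estimated CO2 (kg/day) if vehicle i runs the cycle of route j
rng = np.random.default_rng(0)
n_vehicles = n_routes = 6
base = rng.uniform(80, 120, size=(n_vehicles, 1))       # vehicle-specific factor
demand = rng.uniform(0.8, 1.6, size=(1, n_routes))      # route-cycle severity
co2 = base * demand

rows, cols = linear_sum_assignment(co2)                  # optimal one-to-one assignment
print("vehicle -> route:", dict(zip(rows.tolist(), cols.tolist())))
print("total CO2 (kg/day):", round(co2[rows, cols].sum(), 1))
```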
Abstract:
A first rigorous processing, performed with the Bernese scientific software and following the strictest internationally recommended computation standards, yielded a high-accuracy point field based on the integration and standardization of the data of a GPS network located in Costa Rica. This processing covered a total of 119 weeks of daily data, about 2.3 years, from January 2009 to April 2011, for a total of 30 GPS stations, of which 22 are located in the national territory of Costa Rica and 8 are international stations belonging to the network of the Geocentric Reference System for the Americas (SIRGAS). The so-called semi-free solutions generated, week by week, a GPS network with high internal accuracy defined by the vectors between stations and the final coordinates of the satellite constellation. The weekly evaluation given by the repeatability of the solutions yielded average errors of 1.7 mm, 1.4 mm and 5.1 mm in the [n e u] components, confirming the high consistency of these solutions. Although the semi-free solutions have high internal accuracy, they cannot be used for kinematic analysis because they lack a reference frame. In Latin America, the densification of the International Terrestrial Reference Frame (ITRF) is represented by the SIRGAS network of continuously operating GNSS stations, known as SIRGAS-CON. By means of the weekly final coordinates of the 8 stations used as ties, each of the 119 solutions was referred to the SIRGAS frame. The introduction of the SIRGAS reference frame deforms the semi-free solutions; these deformations are a product of the kinematics of the plates on which the tie stations are located. After the weekly tie to the SIRGAS frame, the velocity vector of each station was estimated, including those of the tie stations, whose velocities are known with high accuracy. For the determination of the velocities of the Costa Rican stations, a routine based on a least-squares fit was programmed in the MATLAB environment. The values obtained in this project, compared with the official values, showed average differences of the order of 0.06 cm/yr, -0.08 cm/yr and -0.10 cm/yr for the [X Y Z] coordinates, respectively.
In this way, the geocentric coordinates [X Y Z]T and their temporal variations [vX vY vZ]T were determined for the set of 22 GPS stations of Costa Rica, within the IGS05 datum, reference epoch 2010.5. Although high accuracy was achieved for the geocentric coordinate vectors of the 22 stations, for some stations the velocity estimate was not representative because of the relatively short data span (less than one year). Under this premise, the eight stations located in the south of the country were excluded, which meant estimating the local velocity field with only twenty national stations plus three stations in Panama and one in Nicaragua. The algorithm used was Least Squares Collocation, which allows values to be estimated or interpolated from effectively known data, and which was programmed as a MATLAB routine.
The resulting field was estimated with a resolution of 30' x 30' and is highly uniform, with an average resultant velocity of 2.58 cm/yr in a direction of 40.8° towards the northeast. This field was validated against the VEMOS2009 model recommended by SIRGAS. The average velocity differences for the stations used as input for the field computation were of the order of +0.63 cm/yr and +0.22 cm/yr for the latitude and longitude components, which indicates a good determination of the velocity values and of the empirical covariance function needed to apply the collocation method. Furthermore, the grid used as the basis for the interpolation showed differences of about -0.62 cm/yr and -0.12 cm/yr in latitude and longitude. Additionally, the results of this work were used as input for a first approximation of the boundary of the so-called Panama Block within the national territory of Costa Rica. The calculation of the Euler pole components, using a MATLAB routine applied to different combinations of points, did not contribute further to the physical definition of this boundary: the strategy merely confirmed the difference in direction of all the velocity vectors and did not reveal the location of this boundary within the territory of Costa Rica in more detail.
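An illustrative Python stand-in for the per-station velocity estimation described above (the thesis used a MATLAB routine): an ordinary least-squares straight-line fit to weekly coordinate solutions, whose slopes are the velocity components. The synthetic series, noise level and true velocities are invented.

```python
# Illustrative sketch: station velocity from weekly coordinate solutions by an
# ordinary least-squares straight-line fit per component. Data are synthetic.
import numpy as np

def fit_velocity(t_years, coords):
    """coords: (n_weeks, 3) array of X, Y, Z in metres; returns velocity (m/yr)
    for each component from the slope of a least-squares straight line."""
    A = np.column_stack([t_years, np.ones_like(t_years)])     # design matrix [t, 1]
    params, *_ = np.linalg.lstsq(A, coords, rcond=None)
    return params[0]                                           # slopes = velocities

if __name__ == "__main__":
    weeks = np.arange(119)
    t = weeks / 52.18                                          # time in years
    true_v = np.array([0.010, -0.002, 0.015])                  # m/yr (invented)
    rng = np.random.default_rng(0)
    coords = t[:, None] * true_v + 0.003 * rng.normal(size=(t.size, 3))
    print("estimated velocity (m/yr):", np.round(fit_velocity(t, coords), 4))
```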