982 results for Interior point algorithm
Abstract:
Mitochondrial (M) and lipid droplet (L) volume density (vd) are often used in exercise research. Vd is the proportion of muscle volume occupied by M and L. These percentages are calculated by applying a grid to a 2D image taken with transmission electron microscopy; however, it is not known which grid best predicts these values. PURPOSE: To determine the grid with the least variability of Mvd and Lvd in human skeletal muscle. METHODS: Muscle biopsies were taken from the vastus lateralis of 10 healthy adults, trained (N=6) and untrained (N=4). Samples of 5-10 mg were fixed in 2.5% glutaraldehyde and embedded in EPON. Longitudinal sections of 60 nm were cut and 20 images were taken at random at 33,000x magnification. Vd was calculated as the number of times M or L touched two intersecting grid lines (called a point) divided by the total number of points, using 3 different grid sizes with squares of 1000x1000 nm sides (corresponding to 1 µm²), 500x500 nm (0.25 µm²) and 250x250 nm (0.0625 µm²). Statistics included the coefficient of variation (CV), one-way BS ANOVA and Spearman correlations. RESULTS: Mean age was 67 ± 4 years, mean VO2peak 2.29 ± 0.70 L/min and mean BMI 25.1 ± 3.7 kg/m². Mean Mvd was 6.39% ± 0.71 for the 1000 nm squares, 6.01% ± 0.70 for the 500 nm and 6.37% ± 0.80 for the 250 nm. Lvd was 1.28% ± 0.03 for the 1000 nm, 1.41% ± 0.02 for the 500 nm and 1.38% ± 0.02 for the 250 nm. The mean CV of the three grids was 6.65% ± 1.15 for Mvd, with no significant differences between grids (P>0.05). Mean CV for Lvd was 13.83% ± 3.51, with a significant difference between the 1000 nm squares and the two other grids (P<0.05). The 500 nm squares grid showed the least variability between subjects. Mvd showed a positive correlation with VO2peak (r = 0.89, p < 0.05) but not with weight, height, or age. No correlations were found with Lvd. CONCLUSION: Different grid sizes yield different variability in assessing skeletal muscle Mvd and Lvd. The grid size of 500x500 nm (240 points) was more reliable than 1000x1000 nm (56 points). The 250x250 nm grid (1023 points) did not show better reliability than the 500x500 nm grid, but was more time consuming. Thus, choosing a grid with a square size of 500x500 nm seems the best option. This is particularly relevant as most grids used in the literature have either 100 points or 400 points, without clear information on their square size.
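For readers unfamiliar with point counting, a minimal sketch of the arithmetic behind Vd and the CV is given below; the function names and counts are illustrative assumptions, not the study's actual analysis code.

```python
# Minimal sketch of the point-counting arithmetic (illustrative only).
import numpy as np

def volume_density(points_on_structure, total_points):
    """Vd (%) = grid points falling on the structure / total grid points."""
    return 100.0 * points_on_structure / total_points

def coefficient_of_variation(values):
    """CV (%) = SD / mean, e.g. across the 20 images of one biopsy."""
    values = np.asarray(values, dtype=float)
    return 100.0 * values.std(ddof=1) / values.mean()

# Hypothetical counts for one image with the 500 x 500 nm grid (240 points):
print(volume_density(points_on_structure=15, total_points=240))   # 6.25 % Mvd
# Hypothetical per-image Mvd values for one subject:
print(coefficient_of_variation([6.1, 6.4, 5.9, 6.6, 6.2]))
```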
Abstract:
The implicit projection algorithm of isotropic plasticity is extended to an objective anisotropic, elastic-perfectly-plastic model. The recursion formula developed to project the trial stress onto the yield surface is applicable to any nonlinear elastic law and any plastic yield function. A curvilinear transverse isotropic model, based on a quadratic elastic potential and on Hill's quadratic yield criterion, is then developed and implemented in a computer program with bone mechanics applications in view. The paper concludes with a numerical study of a schematic bone-prosthesis system to illustrate the potential of the model.
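The core ingredient of such implicit projection (return-mapping) algorithms is the projection of a trial stress back onto the yield surface. The sketch below illustrates the idea for the simplest possible case, linear isotropic elasticity with a von Mises surface and perfect plasticity, where the projection reduces to the classical radial return; it is not the anisotropic, nonlinear-elastic formulation developed in the paper.

```python
import numpy as np

def radial_return(sigma_trial, sigma_y, G):
    """Project a trial stress tensor onto the von Mises yield surface
    (perfect plasticity, linear isotropic elasticity, shear modulus G)."""
    s_trial = sigma_trial - np.trace(sigma_trial) / 3.0 * np.eye(3)  # deviator
    seq = np.sqrt(1.5 * np.sum(s_trial * s_trial))                   # equivalent stress
    f = seq - sigma_y                                                # yield function
    if f <= 0.0:
        return sigma_trial                     # elastic: trial state is admissible
    dgamma = f / (3.0 * G)                     # plastic multiplier (closed form here)
    n = 1.5 * s_trial / seq                    # flow direction
    return sigma_trial - 2.0 * G * dgamma * n  # projected (updated) stress

# Uniaxial trial stress beyond yield; the result lies on the yield surface.
sigma_trial = np.diag([300.0, 0.0, 0.0])       # MPa
print(radial_return(sigma_trial, sigma_y=250.0, G=80000.0))
```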
Abstract:
SUMMARY: Eukaryotic DNA interacts with nuclear proteins through non-covalent ionic interactions. Proteins can recognize specific nucleotide sequences through steric interactions with the DNA, and these specific protein-DNA interactions are the basis of many nuclear processes, e.g. gene transcription, chromosomal replication, and recombination. A new technology termed ChIP-Seq has recently been developed for the analysis of protein-DNA interactions on a whole-genome scale; it is based on immunoprecipitation of chromatin followed by a high-throughput DNA sequencing procedure. ChIP-Seq is a novel technique with great potential to replace older techniques for mapping protein-DNA interactions. In this thesis, we bring some new insights into ChIP-Seq data analysis. First, we point out some common and so far unrecognized artifacts of the method. The sequence tag distribution in the genome does not follow a uniform distribution, and we have found extreme hot-spots of tag accumulation over specific loci in the human and mouse genomes. These artifactual sequence tag accumulations will create false peaks in every ChIP-Seq dataset, and we propose different filtering methods to reduce the number of false positives. Next, we propose random sampling as a powerful analytical tool for ChIP-Seq data analysis that can be used to infer biological knowledge from massive ChIP-Seq datasets. We created an unbiased random sampling algorithm and used this methodology to reveal some of the important biological properties of Nuclear Factor I DNA-binding proteins. Finally, by analyzing the ChIP-Seq data in detail, we revealed that Nuclear Factor I transcription factors mainly act as activators of transcription, and that they are associated with specific chromatin modifications that are markers of open chromatin. We speculate that NFI factors only interact with DNA wrapped around the nucleosome. We also found multiple loci that indicate a possible chromatin barrier activity of NFI proteins, which could suggest the use of NFI binding sequences as chromatin insulators in biotechnology applications.
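A minimal sketch of the kind of tag subsampling and hot-spot filtering described above is given below; the tag representation, per-position cap and sample size are illustrative assumptions, not the thesis's actual pipeline.

```python
import random

def subsample_tags(tags, n, seed=0):
    """Draw n sequence tags uniformly at random, without replacement."""
    rng = random.Random(seed)
    return rng.sample(tags, n)

def filter_hotspots(tags, max_tags_per_position):
    """Crude hot-spot filter: cap the number of tags mapping to one position."""
    counts, kept = {}, []
    for tag in tags:                       # tag = (chromosome, position, strand)
        counts[tag] = counts.get(tag, 0) + 1
        if counts[tag] <= max_tags_per_position:
            kept.append(tag)
    return kept

# Illustrative usage on made-up tags:
tags = [("chr1", 100, "+")] * 50 + [("chr1", 2000, "-")] * 3
print(len(filter_hotspots(tags, max_tags_per_position=5)))   # 8
print(len(subsample_tags(tags, n=10)))                        # 10
```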
Abstract:
This paper responds to Leandro Prados de la Escosura's reply concerning my estimate of the historical series of Spain's Gross Domestic Product (R.E.A., 49 (XVII), 2009, pp. 5-45). It shows that the CPI is the best deflator available for reconstructing GDP prior to the CNE, because it correctly measures year-on-year changes in the general price level. It uses a procedure for linking the historical series to the national accounts that is consistent with the new national accounting systems SNA-93 and ESA-95 and coincides with those employed by all international economic organizations. The updating of the accounting system, with the inclusion of the underground economy and of production for own final use, has rendered earlier estimates obsolete and calls for second-generation estimates, with the inevitable readjustment of the relative levels of the different economies.
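The two operations at stake, deflating a nominal series with the CPI and ratio-linking an older series to a newer benchmark, can be illustrated with a toy calculation; all figures below are invented.

```python
def deflate(nominal, cpi, base_year):
    """Real series = nominal series / (CPI rebased so that CPI[base_year] = 1)."""
    base = cpi[base_year]
    return {year: value / (cpi[year] / base) for year, value in nominal.items()}

def link_series(old_series, new_series, link_year):
    """Rescale the old series so that it matches the new series in the link year."""
    factor = new_series[link_year] / old_series[link_year]
    return {year: value * factor for year, value in old_series.items()}

# Invented data: nominal GDP and CPI, then a retrospective re-benchmarking.
nominal = {1995: 100.0, 1996: 106.0}
cpi     = {1995: 1.00,  1996: 1.03}
print(deflate(nominal, cpi, base_year=1995))     # real GDP at 1995 prices
old = {1993: 90.0, 1994: 95.0, 1995: 100.0}
new = {1995: 104.0, 1996: 110.0}
print(link_series(old, new, link_year=1995))     # old levels raised by 4 %
```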
Abstract:
BACKGROUND: Management of blood pressure (BP) in acute ischemic stroke is controversial. The present study aims to explore the association between baseline BP levels and BP change and outcome, in the overall stroke population and in specific subgroups defined by the presence of arterial hypertensive disease and prior antihypertensive treatment. METHODS: All patients registered in the Acute STroke Registry and Analysis of Lausanne (ASTRAL) between 2003 and 2009 were analyzed. Unfavorable outcome was defined as a modified Rankin score of more than 2. A local polynomial surface algorithm was used to assess the effect of BP values on outcome in the overall population and in predefined subgroups. RESULTS: Up to a certain point, as initial BP increased, optimal outcome was seen with a progressively more substantial BP decrease over the next 24-48 h. Patients without hypertensive disease and with an initially low BP seemed to benefit from an increase in BP. In patients with hypertensive disease, initial BP and its subsequent changes seemed to have less influence on clinical outcome. Patients who had previously been treated with antihypertensives did not tolerate initially low BPs well. CONCLUSION: Optimal outcome in acute ischemic stroke may be determined not only by initial BP levels but also by the direction and magnitude of the associated BP change over the first 24-48 h.
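A local polynomial surface fit of outcome on two blood pressure predictors can be illustrated with a simple locally weighted linear regression; this is a generic sketch with invented data and bandwidth, not the registry's actual model.

```python
import numpy as np

def local_linear_surface(x1, x2, y, grid1, grid2, bandwidth):
    """Locally weighted linear fit of y on (x1, x2), evaluated on a grid.
    x1, x2, y: 1-D arrays of observations; returns a len(grid1) x len(grid2) surface."""
    X = np.column_stack([np.ones_like(x1), x1, x2])
    surface = np.empty((len(grid1), len(grid2)))
    for i, g1 in enumerate(grid1):
        for j, g2 in enumerate(grid2):
            d2 = ((x1 - g1) ** 2 + (x2 - g2) ** 2) / bandwidth ** 2
            w = np.exp(-0.5 * d2)                          # Gaussian kernel weights
            WX = X * w[:, None]
            beta = np.linalg.lstsq(WX.T @ X, WX.T @ y, rcond=None)[0]
            surface[i, j] = beta[0] + beta[1] * g1 + beta[2] * g2
    return surface

# Invented example: outcome score vs. admission BP and 24-h BP change.
rng = np.random.default_rng(0)
bp0, dbp = rng.uniform(120, 220, 300), rng.uniform(-60, 20, 300)
outcome = (bp0 > 180).astype(float) * 0.3 + (dbp > 0) * 0.2 + rng.normal(0, 0.1, 300)
surf = local_linear_surface(bp0, dbp, outcome, np.linspace(130, 210, 5),
                            np.linspace(-50, 10, 5), bandwidth=25.0)
print(surf.shape)   # (5, 5)
```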
Abstract:
A family of nonempty closed convex sets is built by using the data of the Generalized Nash equilibrium problem (GNEP). The sets are selected iteratively such that the intersection of the selected sets contains solutions of the GNEP. The algorithm introduced by Iusem-Sosa (2003) is adapted to obtain solutions of the GNEP. Finally some numerical experiments are given to illustrate the numerical behavior of the algorithm.
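A basic ingredient of such constructions is projecting onto closed convex sets and their intersections. The sketch below illustrates this with cyclic projections onto half-spaces, which converge to a point of the intersection when it is nonempty; it is only a generic illustration of that ingredient, not the adapted Iusem-Sosa scheme.

```python
import numpy as np

def project_halfspace(x, a, b):
    """Projection of x onto the half-space {z : a.z <= b}."""
    viol = a @ x - b
    return x if viol <= 0 else x - viol * a / (a @ a)

def cyclic_projections(x0, halfspaces, iters=200):
    """Cyclic projections onto a list of half-spaces given as (a, b) pairs."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        for a, b in halfspaces:
            x = project_halfspace(x, np.asarray(a, dtype=float), float(b))
    return x

# Toy intersection: x + y <= 1, -x <= 0, -y <= 0 (a triangle).
print(cyclic_projections([3.0, 3.0], [([1, 1], 1), ([-1, 0], 0), ([0, -1], 0)]))
```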
Abstract:
The Treaty Establishing the European Community, in force until December 1st 2009, had already established in its Article 2 that the mission of the then European Community, and of the present European Union, is to promote a harmonious, balanced and sustainable development of economic activities throughout the Community. This mission is to be achieved by establishing a Common Market and an Economic and Monetary Union and by carrying out Common Policies. One of the instruments for attaining these objectives is the free movement of people, services and capital within the Common and Internal Market of the European Union. The European Union is characterized by the affirmation of the full freedom of movement of capital, services and natural and legal persons; a freedom already proclaimed by the Maastricht Treaty, through the suppression of whatever obstacles stand in the way of the objectives set out above. The former TEC, in its Title III, now Title IV of the Treaty on the Functioning of the European Union, covered the free movement of people, services and capital. Consequently, the inclusion of this mechanism in one of the founding texts of the European Union indicates the importance of this freedom for the development of the European Union's objectives. Having highlighted the relevance of the free movement of people, services and capital, we should mention that in this paper we focus our study on one of these freedoms of movement: the free movement of capital. In order to analyze the free movement of capital within the European framework in detail, we start from the existing case law of the Court of Justice of the European Union. The use of case law is essential to understand how Community legislation is interpreted. For this reason, we develop this work through judgments delivered by the Court of Justice of the European Union. In this way we can observe how Member States' regulations and European Community law affect the free movement of capital. The starting point of this paper is the Judgment of the European Court of Justice of February 12th 2009 in Case C-67/08, known as the Block case. Following the reasoning of the Luxembourg Court in that case, we examine how the free movement of capital can be affected by the current disparity among Member States' legislation. This disparity can produce cases of double taxation owing to the lack of harmonized tax legislation within the internal market and the lack of treaties to avoid double taxation within the European Union. Developing this idea, we show how double taxation can, at least indirectly, infringe the free movement of capital.
Abstract:
We study the concept of propagation connectivity on random 3-uniform hypergraphs. This concept is inspired by a simple linear time algorithm for solving instances of certain constraint satisfaction problems. We derive upper and lower bounds for the propagation connectivity threshold, and point out some algorithmic implications.
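On one common formulation of propagation connectivity (start from a pair of marked vertices and mark the third vertex of any hyperedge that already has two marked vertices), the property can be checked by brute force on small instances, as in the sketch below; this formulation is stated here as an assumption, not quoted from the paper.

```python
from itertools import combinations

def propagates_from(pair, edges, n):
    """Mark the pair, then repeatedly mark the third vertex of any edge that
    already has two marked vertices.  True if all n vertices end up marked."""
    marked = set(pair)
    changed = True
    while changed:
        changed = False
        for e in edges:
            if len(marked & e) == 2 and not e <= marked:
                marked |= e
                changed = True
    return len(marked) == n

def propagation_connected(edges, n):
    """Check whether some starting pair of vertices propagates to the whole set."""
    edges = [frozenset(e) for e in edges]
    return any(propagates_from(p, edges, n) for p in combinations(range(n), 2))

# Tiny example on 4 vertices:
print(propagation_connected([(0, 1, 2), (1, 2, 3)], n=4))   # True, e.g. from {0, 1}
```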
Stabilized Petrov-Galerkin methods for the convection-diffusion-reaction and the Helmholtz equations
Abstract:
We present two new stabilized high-resolution numerical methods for the convection–diffusion–reaction (CDR) and the Helmholtz equations, respectively. The work begins with an a priori analysis of some consistency-recovery procedures for stabilization methods belonging to the Petrov–Galerkin framework. It was found that the use of some standard practices (e.g. M-matrices theory) for the design of essentially non-oscillatory numerical methods is not feasible when consistency-recovery methods are employed. Hence, with respect to convective stabilization, such recovery methods are not preferred. Next, we present the design of a high-resolution Petrov–Galerkin (HRPG) method for the 1D CDR problem. The problem is studied from a fresh point of view, including practical implications for the formulation of the maximum principle, M-matrices theory, monotonicity and total variation diminishing (TVD) finite volume schemes. The current method follows earlier methods that may be viewed as an upwinding operator plus a discontinuity-capturing operator. Some remarks are then made on the extension of the HRPG method to multiple dimensions. Next, we present a new numerical scheme for the Helmholtz equation that yields quasi-exact solutions. The focus is on approximating the solution of the Helmholtz equation in the interior of the domain using compact stencils. Piecewise linear/bilinear polynomial interpolation is considered on a structured mesh/grid. The only a priori requirement is a mesh/grid resolution of at least eight elements per wavelength. No stabilization parameters are involved in the definition of the scheme. The scheme consists of taking the average of the equation stencils obtained by the standard Galerkin finite element method and the classical finite difference method. Dispersion analyses in 1D and 2D illustrate the quasi-exact properties of this scheme. Finally, some remarks are made on the extension of the scheme to unstructured meshes by designing a method within the Petrov–Galerkin framework.
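The averaging idea of the Helmholtz scheme can be made concrete in 1D: for u'' + k²u = 0 on a uniform mesh, the linear-FEM and central finite-difference stencils are combined with equal weights. The sketch below is a simplified 1D illustration of that construction under stated assumptions, not the paper's full scheme.

```python
import numpy as np

def averaged_helmholtz_1d(k, n_elem):
    """Solve u'' + k^2 u = 0 on [0, 1], u(0)=0, u(1)=sin(k), with the stencil
    obtained by averaging the linear-FEM and central finite-difference stencils."""
    h = 1.0 / n_elem
    x = np.linspace(0.0, 1.0, n_elem + 1)
    # FEM row:  (u_{j-1}-2u_j+u_{j+1})/h + k^2 h (u_{j-1}+4u_j+u_{j+1})/6 = 0
    # FDM row:  (u_{j-1}-2u_j+u_{j+1})/h + k^2 h  u_j                     = 0
    # Averaging the two mass terms gives weights (1/12, 10/12, 1/12).
    off  = 1.0 / h + k**2 * h * (1.0 / 12.0)
    diag = -2.0 / h + k**2 * h * (10.0 / 12.0)
    m = n_elem - 1                            # interior unknowns
    A = np.diag(np.full(m, diag)) + np.diag(np.full(m - 1, off), 1) \
        + np.diag(np.full(m - 1, off), -1)
    rhs = np.zeros(m)
    rhs[-1] -= off * np.sin(k)                # right boundary value u(1) = sin(k)
    u = np.zeros(n_elem + 1)
    u[-1] = np.sin(k)
    u[1:-1] = np.linalg.solve(A, rhs)
    return x, u

# Resolution well above the eight-elements-per-wavelength guideline for k = 20.
x, u = averaged_helmholtz_1d(k=20.0, n_elem=160)
print(np.max(np.abs(u - np.sin(20.0 * x))))   # error vs. exact solution sin(kx)
```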
Abstract:
Coltop3D is a software that performs structural analysis by using digital elevation model (DEM) and 3D point clouds acquired with terrestrial laser scanners. A color representation merging slope aspect and slope angle is used in order to obtain a unique code of color for each orientation of a local slope. Thus a continuous planar structure appears in a unique color. Several tools are included to create stereonets, to draw traces of discontinuities, or to compute automatically density stereonet. Examples are shown to demonstrate the efficiency of the method.
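The unique-color idea can be sketched with a simple hue-saturation mapping, aspect to hue and slope angle to saturation, which is only an approximation of Coltop3D's actual color wheel.

```python
import colorsys

def slope_color(aspect_deg, slope_deg):
    """Map slope aspect (0-360 deg) to hue and slope angle (0-90 deg) to saturation,
    so each (aspect, slope) pair gets a distinct RGB color; flat terrain is white."""
    hue = (aspect_deg % 360.0) / 360.0
    saturation = min(max(slope_deg / 90.0, 0.0), 1.0)
    return colorsys.hsv_to_rgb(hue, saturation, 1.0)

print(slope_color(aspect_deg=0.0,   slope_deg=0.0))    # flat -> white (1, 1, 1)
print(slope_color(aspect_deg=90.0,  slope_deg=60.0))   # east-facing, steep
print(slope_color(aspect_deg=270.0, slope_deg=60.0))   # west-facing, steep
```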
Abstract:
"Vegeu el resum a l'inici del document del fitxer adjunt."
Abstract:
Hypoglycemia, if recurrent, may have severe consequences for the cognitive and psychomotor development of neonates. Therefore, screening for hypoglycemia is a daily routine in every facility taking care of newborn infants. Point-of-care testing (POCT) devices are attractive for neonatal use, as their handling is easy, measurements can be performed at the bedside, the required blood volume is small and results are readily available. However, such whole-blood measurements are challenged by the wide variation of hematocrit in neonates and a spectrum of normal glucose concentrations at the lower end of the test range. We conducted a prospective trial to check the precision and accuracy of the most suitable POCT device for neonatal use from each of three leading companies in Europe. Of the three devices tested (Precision Xceed, Abbott; Elite XL, Bayer; Aviva Nano, Roche), the Aviva Nano exhibited the best precision. None completely fulfilled the ISO 15197 accuracy criteria (2003 or 2011). The Aviva Nano fulfilled these criteria in 92% of cases while the others were below 87%. The Precision Xceed reached the 95% limit of the 2003 ISO criteria for values ≤4.2 mmol/L, but not for the higher range (71%). Although validated for adults, new POCT devices need to be specifically evaluated in newborn infants before adopting their routine use in neonatology.
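As an illustration of this kind of accuracy scoring, the sketch below checks paired meter/reference values against the ISO 15197:2003 limits as they are commonly stated (within ±0.83 mmol/L of the reference below 4.2 mmol/L, within ±20% at or above it, with at least 95% of results required to comply); the paired values are invented.

```python
def within_iso_2003(meter, reference):
    """ISO 15197:2003 limit: |error| <= 0.83 mmol/L below 4.2 mmol/L, else <= 20 %."""
    if reference < 4.2:
        return abs(meter - reference) <= 0.83
    return abs(meter - reference) <= 0.20 * reference

def iso_pass_rate(pairs):
    """Fraction of (meter, reference) pairs meeting the limit (>= 0.95 to pass)."""
    hits = sum(within_iso_2003(m, r) for m, r in pairs)
    return hits / len(pairs)

# Invented paired measurements (mmol/L):
pairs = [(2.4, 2.6), (3.0, 3.9), (4.5, 4.4), (5.6, 5.0), (6.8, 6.9)]
rate = iso_pass_rate(pairs)
print(rate, "pass" if rate >= 0.95 else "fail")   # 0.8 fail
```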
Abstract:
"Vegeu el resum a l'inici del document del fitxer adjunt"