946 results for Fundamentals in linear algebra
Abstract:
Theoretical computer science is a foundational discipline, since most advances in computing rest on solid results from that field. In recent years, owing both to the increase in computing power and to the approaching physical limit of miniaturization of electronic components, interest has revived in formal models of computation that are alternatives to the classical von Neumann architecture. Many of these models are inspired by the way nature efficiently solves very complex problems. Most of them are computationally complete and inherently parallel. For this reason they are coming to be regarded as new paradigms of computation (natural computing). A range of abstract architectures is therefore available that are as powerful as conventional computers and sometimes more efficient: some of them improve performance, at least in time, on NP-complete problems by providing non-exponential costs. The formal representation of networks of evolutionary processors requires both context-free and context-dependent constructions; in other words, a complete formal representation of a NEP in general entails both syntactic and semantic restrictions, so many apparently (syntactically) correct representations of particular instances of these devices would be meaningless because they might fail to satisfy further semantic constraints. Applying semantic grammatical evolution to NEPs involves choosing a subset of them within which to search for those that solve a specific problem. This work studies a model inspired by cell biology called networks of evolutionary processors [55, 53], that is, networks whose nodes are very simple processors capable of performing only one type of point mutation (insertion, deletion or substitution of a symbol). Each node is associated with a filter defined by some random-context or membership condition. Networks of at most six nodes, with filters defined by membership in regular languages, are able to generate all recursively enumerable languages regardless of the underlying graph. This result is not surprising, since similar results have been documented in the literature. If one considers networks whose filters are defined by random contexts (which seem closer to biological implementations), then more complex languages, such as non-context-free languages, can be generated. Nevertheless, these very simple mechanisms are able to solve complex problems in polynomial time. A linear-time solution has been presented for an NP-complete problem, the 3-colorability problem. As a first significant contribution, a new dynamics for networks of evolutionary processors has been proposed, with non-deterministic and massively parallel behaviour [55]; consequently, all the research work in the area of processor networks can be carried over to massively parallel networks. For example, massively parallel networks can be modified according to certain rules to move the filters to the connections. Each connection is viewed as a bidirectional channel, so that the input and output filters coincide. Despite this, these networks are computationally complete.
Other kinds of rules can also be implemented to extend this computational model. The point mutations associated with each node are replaced by the splicing operation. This new type of processor is called a splicing processor. This computational model, the network of splicing processors (ANSP), is somewhat similar to distributed test-tube systems based on splicing. In addition, a new model has been defined [56], networks of evolutionary processors with filtered connections, in which the processors only have rules and the filters have been moved to the connections. Under certain conditions this model is equivalent to classical networks of evolutionary processors; without those restrictions, the proposed model is a superset of classical NEPs. The main advantage of moving the filters to the connections lies in the simplicity of the modelling. Another contribution of this work has been the design of a Java simulator [54, 52] for the networks of evolutionary processors proposed in this Thesis. Regarding the term "evolutionary processor" used in this Thesis, the computational process described here is not exactly an evolutionary process in the Darwinian sense, but the rewriting operations considered can be interpreted as mutations and the filtering processes can be viewed as selection processes. Furthermore, this work does not address the possible biological implementation of these networks, despite its great importance. Throughout this thesis the complexity measure adopted for ANSPs is one we denote size (taking size to be the number of nodes of the underlying graph). It has been shown that any recursively enumerable language L can be accepted by an ANSP in which the number of processors is linearly bounded by the cardinality of the tape alphabet of a Turing machine recognizing L. Following the concept of universal ANSPs introduced by Manea [65], it has been proved that an ANSP with a fixed graph structure can accept any recursively enumerable language. An ANSP can be regarded as a problem-solving device and has another property that is relevant from a practical point of view: a universal ANSP can be defined as a subnetwork in which only a limited number of parameters depend on the language. This feature can be interpreted as a method for solving any NP problem in polynomial time using an ANSP of constant size, namely thirty-one. This means that the solution of any NP problem is uniform in the sense that the network, apart from the universal subnetwork, can be seen as a program; adapting it to the problem instance to be solved amounts to choosing the filters and rules that do not belong to the universal subnetwork. An interesting problem from our point of view is how to choose the optimal size of this network.---ABSTRACT---This thesis deals with recent research in the area of Natural Computing (bio-inspired models), more precisely Networks of Evolutionary Processors, first developed by Victor Mitrana and based on the P systems introduced by Gheorghe Păun. These models consist of a set of processors connected by an underlying undirected graph; each processor holds a multiset of objects (strings) and a set of rules, called evolution rules, that transform the objects inside the processor [55, 53].
These objects can be sent and received over the graph connections provided they satisfy the constraints defined by the input and output filters of the processors. This symbolic model, non-deterministic (processors are not synchronized) and massively parallel [55] (all rules can be applied in one computational step), has important properties regarding the solution of NP problems in linear time and, of course, with linear resources. There is a great number of variants, such as hybrid networks, splicing processors, etc., that give the model computational power equivalent to that of Turing machines. The origin of networks of evolutionary processors (NEP for short) is a basic architecture for parallel and distributed symbolic processing, related to the Connection Machine as well as the Logic Flow paradigm, which consists of several processors, each of them placed in a node of a virtual complete graph and able to handle data associated with the respective node. All the nodes send their data simultaneously, and the receiving nodes also handle all the arriving messages simultaneously, according to some strategies. In a series of papers each node is viewed as a cell having genetic information encoded in DNA sequences which may evolve by local evolutionary events, that is, point mutations. Each node is specialized for just one of these evolutionary operations. Furthermore, the data in each node is organized in the form of multisets of words (each word appears in an arbitrarily large number of copies), and all the copies are processed in parallel such that all the possible events that can take place do actually take place. Obviously, the computational process just described is not exactly an evolutionary process in the Darwinian sense. But the rewriting operations we have considered might be interpreted as mutations and the filtering process might be viewed as a selection process. Recombination is missing, but it was asserted that evolutionary and functional relationships between genes can be captured by taking only local mutations into consideration. It is clear that the filters associated with each node allow a strong control of the computation. Indeed, every node has an input and an output filter; two nodes can exchange data if it passes the output filter of the sender and the input filter of the receiver. Moreover, if some data is sent out by a node and is not able to enter any node, then it is lost. In this work we simplify the ANSP model considered previously by moving the filters from the nodes to the edges. Each edge is viewed as a two-way channel such that the input and output filters coincide. Clearly, the possibility of controlling the computation in such networks seems to be diminished. For instance, there is no possibility of losing data during the communication steps. In spite of this, and of the fact that splicing is not a powerful operation (remember that splicing systems generate only regular languages), we prove here that these devices are computationally complete. As a consequence, we propose characterizations of two complexity classes, namely NP and PSPACE, in terms of accepting networks of restricted splicing processors with filtered connections. We propose a uniform linear-time solution to SAT based on ANSPFCs with linearly bounded resources. This solution should be understood correctly: we do not solve SAT in linear time and space.
Since every word and auxiliary word appears in an arbitrarily large number of copies, one can generate in linear time, by parallelism and communication, an exponential number of words, each of them having an exponential number of copies. However, this does not seem to be a major drawback, since by PCR (Polymerase Chain Reaction) one can generate an exponential number of identical DNA molecules in a linear number of reactions. It is worth mentioning that the ANSPFC constructed above remains unchanged for any instance with the same number of variables. Therefore, the solution is uniform in the sense that the network, excepting the input and output nodes, may be viewed as a program: according to the number of variables, we choose the filters, the splicing words and the rules; then we assign all possible values to the variables and evaluate the formula. We proved that ANSPs are computationally complete. Do ANSPFCs remain computationally complete? If this is not the case, what other problems can be efficiently solved by these ANSPFCs? Moreover, the complexity class NP is exactly the class of all languages decided by ANSPs in polynomial time. Can NP be characterized in a similar way with ANSPFCs?
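As an illustration of the mechanism described in this abstract, the following minimal sketch (an assumption-laden toy, not the cited Java simulator [54, 52]) shows one evolutionary processor node: a set of words, substitution rules applied in parallel, and a random-context filter deciding which words may be communicated. All identifiers and the example data are hypothetical.

```python
def apply_substitution(word, a, b):
    # All words obtainable by substituting one occurrence of symbol a by b.
    return {word[:i] + b + word[i + 1:] for i, c in enumerate(word) if c == a}

def evolution_step(words, rules):
    # Massively parallel step: every rule is applied at every position of every word;
    # words to which no rule applies are kept unchanged.
    out = set()
    for w in words:
        derived = set()
        for a, b in rules:
            derived |= apply_substitution(w, a, b)
        out |= derived if derived else {w}
    return out

def passes_filter(word, permitting, forbidding):
    # Random-context filter: all permitting symbols present, no forbidding symbol present.
    return permitting <= set(word) and not (forbidding & set(word))

# Hypothetical substitution node with rules X -> a and Y -> b; the output filter
# requires symbol 'a' and forbids symbol 'Y'.
words = {"XY", "Xa"}
rules = [("X", "a"), ("Y", "b")]
after_evolution = evolution_step(words, rules)
communicable = {w for w in after_evolution
                if passes_filter(w, permitting={"a"}, forbidding={"Y"})}
print(after_evolution, communicable)   # {'aY', 'Xb', 'aa'} and {'aa'}
```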
Abstract:
Based on the idea of the occurrence of a phenomenon and the actions to be carried out, Risk Management should be understood as a determination of relationships between what is assumed to be vulnerable and the way in which the probability of a particular event will be estimated. The issue of vulnerability and risk becomes more important worldwide as time goes on. Vulnerability becomes more evident in the presence of certain natural hazards such as landslides, earthquakes, overflowing rivers and flooding (Pérez Soriano, 2014). It has become evident that vulnerability is mostly affected by human actions, such as the construction of housing in high-risk locations, for example on the slopes of linear works. These properties are conditioned by location, land use, infrastructure, the distribution and density of the population, organizational capacity, etc. Risk management is becoming more demanding regarding the quality of the services offered, in addition to the legal responsibility that the conception, design and construction of slopes and embankments in linear works entail (Pérez Soriano, 2014). This research project focuses on the identification and valuation of the risks derived from slopes and embankments in linear works of the Dominican Republic, in order to reduce the risk of failure of a slope or embankment and the number of victims who may be affected by it, concluding with a risk catalogue for slopes and embankments in linear works.
Abstract:
Arch bridge structural solutions have been known for centuries; in fact, the simple nature of the arch, which requires low tensile and shear strength, was an advantage when simple materials like stone and brick were the only option in ancient times. With the passage of time, and especially after the Industrial Revolution, new materials were adopted in the construction of arch bridges to reach longer spans. Nowadays a long-span arch bridge is made of steel, concrete or a combination of the two ("CFST", concrete-filled steel tube); as a result of using these high-strength materials, very long spans can be achieved. The current record for the longest arch belongs to the Chaotianmen bridge over the Yangtze river in China, with a 552-meter steel span, and the longest reinforced concrete arch is the Wanxian bridge, which also crosses the Yangtze river with a 420-meter span. Today the designer is no longer limited by span length as long as the arch bridge remains the most suitable solution among the alternatives; cable-stayed and suspension bridges are more reasonable if a very long span is desired. As with any large structure, the economic and architectural aspects of bridge construction are extremely important; a slenderer bridge not only has a better appearance but also requires a smaller volume of material, which makes the design more economical. Designing such a bridge, besides high-strength materials, requires precise structural analysis approaches capable of combining the material behaviour, the complex geometry of the structure and the various types of loads that may be applied to the bridge during its service life. Depending on the design strategy, the analysis may evaluate only the linear elastic behaviour of the structure or consider its nonlinear properties as well. Although most structures in the past were designed to act in their elastic range, the rapid increase in computational capacity allows us to consider different sources of nonlinearity in order to achieve more realistic evaluations where the dynamic behaviour of the bridge is important, especially in seismic zones where large movements may occur or the structure experiences the P-Δ effect during the earthquake. The above-mentioned type of analysis is computationally expensive and very time consuming. In recent years, several methods have been proposed to address this problem. Discussing recent developments of these methods and their application to long-span concrete arch bridges is the main goal of this research. Accordingly, existing long-span concrete arch bridges have been studied to gather critical information about their geometry and material properties. Based on this information, several concrete arch bridges were designed for further study. The main spans of these bridges range from 100 to 400 meters. The structural analysis methods implemented in this study are as follows. Elastic analysis: Direct Response History Analysis (DRHA): this method solves the equation of motion directly over the time history of the applied acceleration or imposed load in the linear elastic range. Modal Response History Analysis (MRHA): similar to DRHA, this method is also based on the time history, but the equation of motion is reduced to single-degree-of-freedom systems and the response of each mode is calculated independently. Performing this analysis requires less time than DRHA.
Modal Response Spectrum Analysis (MRSA): as its name suggests, this method calculates the peak response of the structure for each mode and combines them using modal combination rules based on the introduced ground-motion spectra. This method is expected to be the fastest of the elastic analyses. Inelastic analysis: Nonlinear Response History Analysis (NL-RHA): the most accurate strategy for addressing significant nonlinearities in structural dynamics is undoubtedly nonlinear response history analysis, which is similar to DRHA but extended to the inelastic range by updating the stiffness matrix in every iteration. This onerous task clearly increases the computational cost, especially for unsymmetrical structures that must be analyzed in a full 3D model to take torsional effects into consideration. Modal Pushover Analysis (MPA): modal pushover analysis is basically MRHA extended to the inelastic stage. MRHA alone cannot solve the dynamic system because the resisting force fs(u, u̇) is unknown in the inelastic stage; MPA overcomes this obstacle by using the previously recorded fs to evaluate the dynamic system. Extended Modal Pushover Analysis (EMPA): extended modal pushover is one of the most recently proposed methods; it evaluates the response of the structure under multi-directional excitation using the modal pushover strategy. For a given mode, the original pushover neglects the contribution of directions other than the characteristic one; this is reasonable in regular symmetric buildings, but a structure with a complex shape, such as a long-span arch bridge, may undergo strong modal coupling. This method aims to consider modal coupling while taking the same computation time as MPA. Coupled Nonlinear Static Pushover Analysis (CNSP): EMPA adds the contribution of the non-characteristic directions to the formal MPA procedure. However, the static pushovers in EMPA are performed individually for every mode, so the resulting values from different modes can be combined, but this is only valid in the elastic phase; as soon as any element in the structure starts yielding, the neutral axis of that section is no longer fixed for both responses during the earthquake, meaning that the longitudinal deflection unavoidably affects the transverse one, or vice versa. To overcome this drawback, CNSP suggests executing the pushover analysis for the governing modes of each direction at the same time. This strategy is estimated to be more accurate than MPA and EMPA; moreover, the calculation time is reduced because only one pushover analysis is required. Regardless of the strategy, the accuracy of structural analysis is highly dependent on the modelling and numerical integration approaches used in evaluating each method. Therefore, the widely used Finite Element Method is employed in all the analyses performed in this research. Chapter 2 starts with the information gathered about constructed long-span arch bridges and continues with the geometrical and material definition of the new models. Chapter 3 provides detailed information about the structural analysis strategies; a step-by-step description of the procedure of each method is available in Appendix A. The document ends with the description of results and the conclusions in Chapter 4.
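To make the modal combination step concrete, here is a brief sketch of MRSA with the SRSS rule (a generic illustration under assumed modal data, not the thesis implementation; all numbers and names are hypothetical).

```python
import numpy as np

def srss_peak_response(phi, gamma, omega, Sa):
    # Peak displacement estimate by modal response spectrum analysis (MRSA).
    # phi   : (n_dof, n_modes) mode shapes
    # gamma : (n_modes,) modal participation factors
    # omega : (n_modes,) natural circular frequencies [rad/s]
    # Sa    : (n_modes,) spectral accelerations read from the design spectrum [m/s^2]
    Sd = Sa / omega**2                            # spectral displacement of each modal oscillator
    peak_modal = phi * (gamma * Sd)               # peak contribution of each mode at every DOF
    return np.sqrt((peak_modal**2).sum(axis=1))   # SRSS combination (well-separated modes)

# Hypothetical 3-DOF, 2-mode example
phi = np.array([[0.3, 0.7], [0.6, 0.2], [0.9, -0.5]])
gamma = np.array([1.3, 0.4])
omega = 2.0 * np.pi * np.array([0.8, 2.5])
Sa = np.array([2.4, 3.1])
print(srss_peak_response(phi, gamma, omega, Sa))
```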
Abstract:
Vector reconstruction of objects from an unstructured point cloud obtained with a LiDAR-based system (light detection and ranging) is one of the most promising methods to build three-dimensional models of orchards. The cylinder fitting method for woody structure reconstruction of leafless trees from point clouds obtained with a mobile terrestrial laser scanner (MTLS) has been analysed. The advantage of this method is that it performs the reconstruction in a single step. The most time-consuming part of the algorithm is the generation of the cylinder direction, which must be recalculated at the inclusion of each point in the cylinder. The tree skeleton is obtained at the same time as the cluster of cylinders is formed. The method does not guarantee a unique convergence, and the reconstruction parameter values must be carefully chosen. A balanced processing of clusters has also been defined, which has proven to be very efficient in terms of processing time by following the hierarchy of branches, predecessors and successors. The algorithm was applied to simulated MTLS data of virtual orchard models and to MTLS data of real orchards. The constraints applied in the method have been reviewed to ensure better convergence and simpler use of parameters. The results obtained show a correct reconstruction of the woody structure of the trees, and the algorithm runs in linear-logarithmic time.
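A minimal sketch of the underlying geometry of cylinder fitting (a generic PCA-based estimate, not the incremental algorithm analysed in the study; the synthetic data and names are illustrative).

```python
import numpy as np

def fit_cylinder(points):
    # Estimate cylinder axis (point on axis + direction) and radius for a point
    # cluster sampled on a roughly cylindrical branch segment.
    centroid = points.mean(axis=0)
    centered = points - centroid
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    axis = vt[0]                                    # principal direction ~ cylinder axis
    proj = centered @ axis                          # coordinates along the axis
    radial = centered - np.outer(proj, axis)        # components perpendicular to the axis
    radius = np.linalg.norm(radial, axis=1).mean()  # mean distance to the axis line
    return centroid, axis, radius

# Synthetic branch segment: radius 0.05 m, axis along x
t = np.linspace(0.0, 1.0, 200)
theta = np.random.uniform(0.0, 2.0 * np.pi, 200)
pts = np.column_stack([t, 0.05 * np.cos(theta), 0.05 * np.sin(theta)])
print(fit_cylinder(pts))
```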
Abstract:
The 1,3–1,4-β-glucanase from Bacillus macerans (wtGLU) and the 1,4-β-xylanase from Bacillus subtilis (wtXYN) are both single-domain jellyroll proteins catalyzing similar enzymatic reactions. In the fusion protein GluXyn-1, the two proteins are joined by insertion of the entire XYN domain into a surface loop of cpMAC-57, a circularly permuted variant of wtGLU. GluXyn-1 was generated by protein engineering methods, produced in Escherichia coli and shown to fold spontaneously and have both enzymatic activities at wild-type level. The crystal structure of GluXyn-1 was determined at 2.1 Å resolution and refined to R = 17.7% and R(free) = 22.4%. It shows nearly ideal, native-like folding of both protein domains and a small, but significant hinge bending between the domains. The active sites are independent and accessible explaining the observed enzymatic activity. Because in GluXyn-1 the complete XYN domain is inserted into the compact folding unit of GLU, the wild-type-like activity and tertiary structure of the latter proves that the folding process of GLU does not depend on intramolecular interactions that are short-ranged in the sequence. Insertion fusions of the GluXyn-1 type may prove to be an easy route toward more stable bifunctional proteins in which the two parts are more closely associated than in linear end-to-end protein fusions.
Abstract:
The brain can hold the eyes still because it stores a memory of eye position. The brain’s memory of horizontal eye position appears to be represented by persistent neural activity in a network known as the neural integrator, which is localized in the brainstem and cerebellum. Existing experimental data are reinterpreted as evidence for an “attractor hypothesis” that the persistent patterns of activity observed in this network form an attractive line of fixed points in its state space. Line attractor dynamics can be produced in linear or nonlinear neural networks by learning mechanisms that precisely tune positive feedback.
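As a hedged, single-unit caricature of the tuned-positive-feedback idea (not the network model discussed in the article), the sketch below integrates a brief velocity-like pulse when the recurrent weight is exactly 1 and holds the result; a mistuned weight makes the stored activity decay. All parameters are hypothetical.

```python
import numpy as np

def run_unit(w, inputs, tau=0.1, dt=0.001):
    # Euler simulation of tau * dr/dt = -r + w * r + input for one recurrent unit.
    r, trace = 0.0, []
    for u in inputs:
        r += dt / tau * (-r + w * r + u)
        trace.append(r)
    return np.array(trace)

steps = 5000
inputs = np.zeros(steps)
inputs[500:600] = 1.0          # brief eye-velocity-like command, then silence

persistent = run_unit(w=1.0, inputs=inputs)   # tuned feedback: activity is held
leaky = run_unit(w=0.9, inputs=inputs)        # mistuned feedback: activity decays
print(persistent[-1], leaky[-1])
```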
Abstract:
The visual responses of neurons in the cerebral cortex were first adequately characterized in the 1960s by D. H. Hubel and T. N. Wiesel [(1962) J. Physiol. (London) 160, 106-154; (1968) J. Physiol. (London) 195, 215-243] using qualitative analyses based on simple geometric visual targets. Over the past 30 years, it has become common to consider the properties of these neurons by attempting to make formal descriptions of the transformations they execute on the visual image. Most such models have their roots in linear-systems approaches pioneered in the retina by C. Enroth-Cugell and J. R. Robson [(1966) J. Physiol. (London) 187, 517-552], but it is clear that purely linear models of cortical neurons are inadequate. We present two related models: one designed to account for the responses of simple cells in primary visual cortex (V1) and one designed to account for the responses of pattern direction selective cells in MT (or V5), an extrastriate visual area thought to be involved in the analysis of visual motion. These models share a common structure that operates in the same way on different kinds of input, and instantiate the widely held view that computational strategies are similar throughout the cerebral cortex. Implementations of these models for Macintosh microcomputers are available and can be used to explore the models' properties.
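A small sketch of the linear-filter-plus-static-nonlinearity structure that such models elaborate on (an illustrative reduction, not the published V1/MT implementation; the filter parameters and test stimuli are assumptions).

```python
import numpy as np

def gabor(size=21, wavelength=6.0, sigma=4.0, theta=0.0, phase=0.0):
    # Oriented Gabor patch, a standard description of a V1 simple-cell receptive field.
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * xr / wavelength + phase)

def simple_cell_response(stimulus, rf):
    # Linear receptive-field stage followed by halfwave rectification and squaring.
    drive = float(np.sum(stimulus * rf))
    return max(drive, 0.0) ** 2

rf = gabor(theta=0.0)
half = 10
y, x = np.mgrid[-half:half + 1, -half:half + 1]
preferred = np.cos(2 * np.pi * x / 6.0)    # grating matched to the filter's orientation
orthogonal = np.cos(2 * np.pi * y / 6.0)   # grating at the orthogonal orientation
print(simple_cell_response(preferred, rf), simple_cell_response(orthogonal, rf))
```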
Abstract:
INTRODUCTION: Cardiovascular diseases (CVD) are the leading cause of death worldwide, and many of the risk factors are amenable to prevention and control. Although CVD are complex in their etiology and development, high LDL-c and low HDL-c concentrations are the modifiable risk factors most monitored in clinical practice, even though they cannot explain all cardiovascular events. Therefore, investigating how pharmacological and nutritional interventions can modulate oxidative, physical and structural parameters of lipoproteins may provide an additional estimate of cardiovascular risk. Among the various nutrients and bioactive compounds related to CVD, lipids are the most investigated and described in the literature. In this context, unsaturated fatty acids (omega-3, omega-6 and omega-9) have been the focus of numerous studies. OBJECTIVES: To evaluate the effect of supplementation with omega-3, omega-6 and omega-9 on cardiometabolic parameters in adult individuals with multiple risk factors and no previous cardiovascular event. MATERIAL AND METHODS: Randomized, double-blind clinical trial based on a nutritional intervention (3.0 g/day of fatty acids) in the form of capsules containing omega-3 (37% EPA and 23% DHA), omega-6 (65% linoleic acid) or omega-9 (72% oleic acid). The sample comprised individuals of both sexes, aged 30 to 74 years, presenting at least one of the following risk factors: dyslipidemia, diabetes mellitus, obesity and systemic arterial hypertension. After approval by the Ethics Committee, the individuals were allocated to the three intervention groups. At baseline, the individuals were characterized with respect to demographic (sex, age and ethnicity) and clinical aspects (medications, current diseases and family history). At baseline and after 8 weeks of intervention, blood samples were collected after a 12-hour fast. From the plasma the following were analysed: lipid profile (TC, LDL-c, HDL-c, TG), apolipoproteins AI and B, non-esterified fatty acids, PON1 activity, LDL(-) and autoantibodies, fatty acids, glucose, insulin, and the size and percentage distribution of LDL (7 subfractions and phenotype A and non-A) and HDL (10 subfractions). The effects of time and intervention, and the associations between fatty acids and qualitative aspects of the lipoproteins, were tested (SPSS version 20.0, p < 0.05). RESULTS: A first, cross-sectional analysis of the results showed, through linear trend analysis adjusted for cardiovascular risk level, that the highest plasma DHA tertile was positively associated with HDL-c, large HDL and LDL size and negatively associated with small HDL and TG. It was also observed that the highest plasma linoleic acid tertile was positively associated with large HDL and LDL size and negatively associated with small HDL and TG. This pattern of association was not observed when dietary parameters were evaluated. In a subsample that included smokers supplemented with omega-6 and omega-3, omega-3 positively modified the lipid profile and HDL subfractions. In linear regression models adjusted for age, sex and hypertension, plasma DHA showed negative associations with small HDL.
When the effect of omega-3 was evaluated exclusively in smokers and non-smokers, male smokers over 60 years of age, with a low plasma percentage of EPA and DHA (<8%), overweight and with high body fat, were more likely to have a more atherogenic HDL subfraction profile. Based on the above results, the effects of omega-3, omega-6 and omega-9 on cardiometabolic parameters were compared. Omega-3 reduced TG, increased the percentage of large HDL and reduced small HDL. The cardioprotective role of omega-3 was reinforced by the increased incorporation of EPA and DHA, with individuals with EPA and DHA above 8% being more likely to have large HDL and less likely to have small HDL. In addition, a high plasma percentage of omega-9 was associated with less atherogenic LDL particles (phenotype A). CONCLUSION: Plasma, but not dietary, fatty acids correlate with cardiometabolic parameters. Supplementation with omega-3, present in fish oil, reduced TG and improved the qualitative parameters of HDL (more large HDL and less small HDL). The benefits of omega-3 were particularly relevant in smokers and in those with lower baseline plasma EPA and DHA. Plasma omega-9, present in olive oil, also had a positive impact on LDL size and subfractions.
Abstract:
Porous carbon and carbide materials with different structures were characterized using adsorption of nitrogen at 77.4 K before and after preadsorption of n-nonane. The selective blocking of the microporosity with n-nonane shows that ordered mesoporous silicon carbide material (OM-SiC) is almost exclusively mesoporous whereas the ordered mesoporous carbon CMK-3 contains a significant amount of micropores (25%). The insertion of micropores into OM-SiC using selective extraction of silicon by hot chlorine gas leads to the formation of ordered mesoporous carbide-derived carbon (OM-CDC) with a hierarchical pore structure and significantly higher micropore volume as compared to CMK-3, whereas a CDC material from a nonporous precursor is exclusively microporous. Volumes of narrow micropores, calculated by adsorption of carbon dioxide at 273 K, are in linear correlation with the volumes blocked by n-nonane. Argon adsorption measurements at 87.3 K allow for precise and reliable calculation of the pore size distribution of the materials using density functional theory (DFT) methods.
Abstract:
The aim of this study is to examine what kinds of cognitive apprehension primary school pupils use when solving linear generalization problems. 81 pupils in the 5th and 6th years solved two linear generalization problems that differed in the configuration of the given sequence of figures (square tables or trapezoid-shaped tables). The results indicate that the configuration of the sequence of figures conditions the type of apprehension used by the pupils; in some cases they have difficulty switching apprehension.
Abstract:
Investigation of the Middle Miocene-Pleistocene succession in cores at ODP Site 817A (Leg 133), drilled on the slope south of the Queensland Plateau, identified the various material fluxes contributing to sedimentation and has determined thereby the paleogeographic events which occurred close to the studied area and influenced these fluxes. To determine proportions of platform origin and of plankton origin of carbonate mud, two reference sediments were collected: (1) back-reef carbonate mud from the Young Reef area (Great Barrier Reef); and (2) Late Miocene chalk from the Loyalty Basin, off New Caledonia. Through their biofacies and mineralogical and geochemical characters, these reference sediments were used to distinguish the proportions of platform and basin components in carbonate muds of 25 core samples from Hole 817A. Two "origin indexes" (i1 and i2) relate the proportion in platform and basin materials. The relative sedimentation rate is inferred from the high-frequency cycles determined by redox intervals in the cores. Bulk carbonate deposited in each core has been calculated in two ways with close results: (1) from calcimetric data available in the Leg 133 preliminary reports (Davies et al., 1991); and (2) from average magnetic susceptibility of cores, a value negatively correlated to the average carbonate content. Vertical changes in sedimentation rates, in carbonate content, in origin indexes and in "linear fluxes" document the evolution of sediment origins from platform carbonates, planktonic carbonates and insoluble material through time. These data are augmented with the variations in organic-matter content through the 817A succession. The observed changes and their interpretation are not modified by compaction, and are compatible with major paleogeographic events including drowning of the Queensland Plateau (Middle Miocene-Early Pliocene) and the renewal of shallow carbonate production, (1) during the Late Pliocene, and (2) from the Early Pleistocene. The birth and growth of the Great Barrier Reef is also recorded from 0.5 Ma by a strengthening of detrital carbonate deposition and possibly by a lack of clay minerals in the 4 upper cores, a response to trapping of terrigenous material behind this barrier. In addition, a maximum of biological silica production is displayed in the Middle Miocene. These changes constrain the time of events and the sequence-stratigraphy framework some components of which are transgression surface, maximum flooding surface and low-stand turbidites. Sedimentation rates and material fluxes show cycles lasting 1.75 Myr. Whatever their origin (climatic and/or eustatic) these cycles affected the planktonic production primarily. The changes also show that major carbonate variations in the deposits are due to a dilution effect by insoluble material (clay, biogenic silica and volcanic glasses) and that plankton productivity, controlling the major fraction of carbonate sedimentation, depends principally on terrigenous supplies, but also on deep-water upwelling. Accuracy of the method is reduced by redeposition, reworking, and probable occurrence of hiatuses.
Abstract:
The proteome of bovine milk is dominated by just six gene products that constitute approximately 95% of milk protein. Nonetheless, over 150 protein spots can be readily detected following two-dimensional electrophoresis of whole milk. Many of these represent isoforms of the major gene products produced through extensive posttranslational modification. Peptide mass fingerprinting of in-gel tryptic digests (using matrix-assisted laser desorption/ionization-time of flight mass spectrometry (MALDI-TOF MS) in reflectron mode with alpha-cyano-4-hydroxycinnamic acid as the matrix) identified 10 forms of κ-casein with isoelectric point (pI) values from 4.47 to 5.81, but could not distinguish between them. MALDI-TOF MS in linear mode, using sinapinic acid as the matrix, revealed a large tryptic peptide (mass > 5990 Da) derived from the C-terminus that contained all the known sites of genetic variance, phosphorylation and glycosylation. Two genetic variants present as singly or doubly phosphorylated forms could be distinguished using mass data alone. Glycoforms containing a single acidic tetrasaccharide were also identified. The differences in electrophoretic mobility of these isoforms were consistent with the addition of the acidic groups. While more extensively glycosylated forms were also observed, substantial loss of N-acetylneuraminic acid from the glycosyl group was evident in the MALDI spectra such that ions corresponding to the intact glycopeptide were not observed and assignment of the glycoforms was not possible. However, by analysing the pI shifts observed on the two-dimensional gels in conjunction with the MS data, the number of N-acetylneuraminic acid residues, and hence the glycoforms present, could be determined.
Abstract:
Photonic quantum-information processing schemes, such as linear optics quantum computing, and other experiments relying on single-photon interference, inherently require complete photon indistinguishability to enable the desired photonic interactions to take place. Mode-mismatch is the dominant cause of photon distinguishability in optical circuits. Here we study the effects of photon wave-packet shape on tolerance against the effects of mode mismatch in linear optical circuits, and show that Gaussian distributed photons with large bandwidth are optimal. The result is general and holds for arbitrary linear optical circuits, including ones which allow for postselection and classical feed forward. Our findings indicate that some single photon sources, frequently cited for their potential application to quantum-information processing, may in fact be suboptimal for such applications.
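For intuition about mode mismatch (a generic textbook-style calculation, not the analysis of the paper), the overlap of two Gaussian mode functions with a fixed displacement depends only on the ratio of that displacement to the width, so a broader Gaussian tolerates the same absolute mismatch better; values below are illustrative.

```python
import numpy as np

def gaussian_mode_overlap(delta, sigma):
    # |<psi1|psi2>|^2 for two normalized Gaussian mode amplitudes of common width
    # sigma whose centres are displaced by delta (same units as sigma).
    return np.exp(-delta**2 / (4.0 * sigma**2))

delta = 1.0                        # fixed mismatch
for sigma in (0.5, 1.0, 2.0, 4.0):
    print(sigma, gaussian_mode_overlap(delta, sigma))
```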
Abstract:
Mating preferences are common in natural populations, and their divergence among populations is considered an important source of reproductive isolation during speciation. Although mechanisms for the divergence of mating preferences have received substantial theoretical treatment, complementary experimental tests are lacking. We conducted a laboratory evolution experiment, using the fruit fly Drosophila serrata, to explore the role of divergent selection between environments in the evolution of female mating preferences. Replicate populations of D. serrata were derived from a common ancestor and propagated in one of three resource environments: two novel environments and the ancestral laboratory environment. Adaptation to both novel environments involved changes in cuticular hydrocarbons, traits that predict mating success in these populations. Furthermore, female mating preferences for these cuticular hydrocarbons also diverged among populations. A component of this divergence occurred among treatment environments, accounting for at least 17.4% of the among-population divergence in linear mating preferences and 17.2% of the among-population divergence in nonlinear mating preferences. The divergence of mating preferences in correlation with environment is consistent with the classic by-product model of speciation in which premating isolation evolves as a side effect of divergent selection adapting populations to their different environments.
Abstract:
Introductory courses covering modern physics sometimes introduce some elementary ideas from general relativity, though the idea of a geodesic is generally limited to shortest Euclidean length on a curved surface of two spatial dimensions rather than extremal aging in spacetime. It is shown that Epstein charts provide a simple geometric picture of geodesics in one space and one time dimension and that, for a hypothetical uniform gravitational field, geodesics are straight lines on a planar diagram. This means that the properties of geodesics in a uniform field can be calculated with only a knowledge of elementary geometry and trigonometry, thus making the calculation of some basic results of general relativity accessible to students even in an algebra-based survey course on physics.
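A standard weak-field sketch of the extremal-aging idea mentioned above (a textbook-level derivation supplied for illustration, not taken from the article):

```latex
d\tau^{2} = \Bigl(1 + \tfrac{2gx}{c^{2}}\Bigr)\,dt^{2} - \tfrac{dx^{2}}{c^{2}},
\qquad \delta \int d\tau = 0,
\qquad
\int d\tau \approx \int \Bigl(1 + \tfrac{gx}{c^{2}} - \tfrac{\dot{x}^{2}}{2c^{2}}\Bigr)\,dt
\;\Longrightarrow\; \ddot{x} = -g .
```

So, to lowest order in v/c and gx/c^2, the familiar free-fall trajectory emerges as the worldline of extremal aging in a uniform field, consistent with the elementary treatment the article advocates.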