911 results for "top-down approach"
Abstract:
Special issue: Translational Nanomedicine
Abstract:
This work presents the modeling and FPGA implementation of digital compensation systems for time-interleaved ADC (TIADC) mismatches. The development follows a top-down methodology: a behavioral model of a two-channel TIADC and its offset, gain, and clock-skew mismatches was first built in Simulink, followed by behavioral models of the digital mismatch-compensation systems. Clock-skew mismatch is compensated with fractional-delay filters, specifically the efficient Farrow structure. Choosing the filter-design methodology and the particular Farrow structure required a study of the various design methods reported in the literature. The digital compensation models were then converted to VHDL for FPGA implementation and validation, which was carried out using the FPGA-in-the-Loop test methodology. The results obtained with the TIADC mismatch compensators show the high performance gain these structures provide. Beyond this result, the work illustrates the potential of the design, implementation, and FPGA test methodologies employed.
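The Farrow structure mentioned above factors a fractional-delay filter into a few fixed FIR branch filters combined by a polynomial in the fractional delay d, so only d changes at run time. As a hedged illustration (a minimal Python sketch of a Farrow-form cubic Lagrange fractional-delay filter, not the VHDL design of the thesis), the matrix C below is the standard polynomial-in-d expansion of the 4-tap Lagrange weights:

```python
import numpy as np

# Farrow-form fractional delay built from a cubic (4-tap) Lagrange
# interpolator.  Row j of C is the fixed FIR branch filter that
# multiplies d**j, so only the fractional delay d changes at run
# time -- the property that makes the Farrow structure efficient.
C = np.array([
    [0.0,   1.0,  0.0,  0.0],   # d**0 branch
    [-1/3, -1/2,  1.0, -1/6],   # d**1 branch
    [1/2,  -1.0,  1/2,  0.0],   # d**2 branch
    [-1/6,  1/2, -1/2,  1/6],   # d**3 branch
])

def farrow_delay(x, d):
    """Delay x by D = 1 + d samples (0 <= d <= 1) via the Farrow structure."""
    branches = [np.convolve(x, c)[:len(x)] for c in C]   # fixed FIR banks
    y = np.zeros(len(x))
    for b in reversed(branches):                         # Horner's rule in d
        y = y * d + b
    return y
```

Because a cubic Lagrange interpolator is exact on polynomials of degree up to three, delaying the quadratic x[n] = n^2 by D = 1.3 samples reproduces (n - 1.3)^2 wherever all four taps are in range.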
Abstract:
There are classical methods for relational database design, decomposition and synthesis, whose goal is to reach the highest possible level of normalization. The decomposition method is a top-down approach: it starts from an existing relation, examines its normal form, and decomposes it via projections until the relational schema reaches the desired degree of normalization.
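To make one decomposition step concrete, here is a small illustrative Python sketch (not taken from the work above; relation and dependency names are hypothetical): it computes attribute closures and splits a relation on a functional dependency that violates BCNF.

```python
def closure(attrs, fds):
    """Attribute closure of `attrs` under the functional dependencies
    `fds`, where each FD is a (lhs_set, rhs_set) pair."""
    result = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if lhs <= result and not rhs <= result:
                result |= rhs
                changed = True
    return result

def decompose_step(relation, fds):
    """One top-down decomposition step: find an FD X -> Y where X is not
    a superkey of `relation` and split via projections into
    R1 = X+ ∩ R and R2 = (R - R1) ∪ X.  Returns None if no FD violates."""
    for lhs, rhs in fds:
        if lhs <= relation and rhs <= relation and not rhs <= lhs:
            closed = closure(lhs, fds)
            if not relation <= closed:      # lhs is not a superkey: violation
                r1 = closed & relation
                r2 = (relation - r1) | set(lhs)
                return r1, r2
    return None
```

For example, R(A, B, C) with the dependency A -> B (A not a superkey) decomposes into R1(A, B) and R2(A, C).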
Abstract:
The work outlined in this dissertation will allow biochemists and cell biologists to characterize the polyubiquitin chains involved in their cellular environment by following a facile mass-spectrometry-based workflow. The characterization of polyubiquitin chains has been of interest since their discovery in 1984. The profound effects of ubiquitination on the movement and processing of cellular proteins depend exclusively on the structures of the mono- and polyubiquitin modifications, anchored or unanchored, on the protein within the cellular environment. However, structure-function studies have been hindered by the difficulty of identifying complex chain structures, owing to the limited instrument capabilities of the past. Genetic mutations or reiterative immunoprecipitations have previously been used to characterize polyubiquitin chains, but their tedium makes it difficult to study a broad ubiquitinome. Top-down and middle-out mass-spectrometry-based proteomic studies have been reported for polyubiquitin and have had success in characterizing parts of the chain, but no method to date has been able to differentiate all theoretical ubiquitin chain isomers (chain lengths from dimer to tetramer alone allow 1340 possible isomers). The workflow presented here can identify the chain length, topology, and linkages present using a chromatographic-time-scale-compatible LC-MS/MS workflow. To accomplish this, the strategy had to exploit the most recent advances in top-down mass spectrometry, including the advanced electron transfer dissociation (ETD) activation and the sensitivity for large masses of the Orbitrap Fusion Lumos. Spectral interpretation had to be done manually, with the aid of a graphical interface to assign mass shifts, because of the lack of software capable of interpreting fragmentation across isopeptide linkages. However, the method outlined can be applied to any mass-spectrometry-based system provided it yields extensive fragmentation across the polyubiquitin chain, making the method adaptable to future advances in the field.
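The isomer count quoted above (1340 for dimers through tetramers) follows from a simple combinatorial model, sketched here as a hedged illustration: a chain is a rooted tree in which every ubiquitin offers eight distinguishable attachment sites (the seven lysines K6, K11, K27, K29, K33, K48, K63 plus the N-terminal M1), each site carrying at most one distal ubiquitin.

```python
from functools import lru_cache

SITES = 8  # seven lysines plus the N-terminal methionine (M1)

@lru_cache(maxsize=None)
def chains(n):
    """Distinct chain topologies for n ubiquitins: rooted trees where
    each node has SITES distinguishable sites, one child at most per site."""
    return 1 if n == 1 else forests(n - 1, SITES)

@lru_cache(maxsize=None)
def forests(n, s):
    """Ways to hang n ubiquitins off s remaining distinguishable sites."""
    if n == 0:
        return 1
    if s == 0:
        return 0
    total = forests(n, s - 1)              # first site left empty
    for m in range(1, n + 1):              # first site carries a subtree of size m
        total += chains(m) * forests(n - m, s - 1)
    return total

# dimers through tetramers: 8 + 92 + 1240 = 1340 theoretical isomers
total_2_to_4 = sum(chains(n) for n in range(2, 5))
```

Under this model the recursion reproduces exactly the figure given in the abstract: 8 dimers, 92 trimers, and 1240 tetramers.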
Abstract:
Vision plays a very important role in the prevention of danger. Pain likewise serves to prevent bodily harm. We therefore tested the hypothesis that blindness would lead to hypersensitivity to pain as a form of sensory compensation. Indeed, an extensive literature indicates that cross-modal plasticity occurs in blind individuals, upregulating the sensitivity of their residual senses. Moreover, several studies show that pain can be modulated by vision and by temporary visual deprivation. In a first study, we measured thermal detection thresholds and pain thresholds in congenitally blind and sighted participants using a thermode that heats or cools the skin. Participants also rated the pain they perceived in response to CO2 laser stimuli and completed questionnaires assessing their attitudes toward painful situations in everyday life. The results show that congenitally blind individuals have lower pain thresholds and higher pain ratings than their sighted counterparts. The psychometric results further indicate that blind individuals are more attentive to pain. In a second study, we measured the impact of visual experience on pain perception by replicating the first study in a sample of late-blind individuals. The results show that the latter are in all respects similar to sighted individuals in their sensitivity to pain. In a third study, we tested the temperature-discrimination abilities of congenitally blind individuals, since detecting rapid temperature changes is crucial for avoiding burns. It turned out that congenitally blind participants have finer temperature discrimination and are more sensitive to spatial summation of heat. 
In a fourth study, we examined the contribution of Aδ and C fibers to nociceptive processing in blind individuals, as these receptors signal first and second pain, respectively. We observed that congenitally blind individuals detect more easily, and respond faster to, sensations generated by the activation of C fibers. In a fifth and final study, we probed the potential changes that loss of vision might bring about in the descending modulation of nociceptive input by measuring the effects of apprehension of a noxious stimulus on pain perception. The results show that, unlike sighted individuals, congenitally blind individuals experience heightened pain under uncertainty about danger, suggesting that central pain modulation is facilitated in this group. Overall, this work indicates that the absence of visual experience, rather than blindness itself, leads to increased nociceptive sensitivity, adding a further dimension to the multisensory-integration model of vision and pain.
Abstract:
Humans use their grammatical knowledge in more than one way. On the one hand, they use it to understand what others say. On the other hand, they use it to say what they want to convey to others (or to themselves). In either case, they need to assemble the structure of sentences in a systematic fashion, in accordance with the grammar of their language. Despite the fact that the structures that comprehenders and speakers assemble are systematic in an identical fashion (i.e., obey the same grammatical constraints), the two ‘modes’ of assembling sentence structures might or might not be performed by the same cognitive mechanisms. Currently, the field of psycholinguistics implicitly adopts the position that they are supported by different cognitive mechanisms, as evident from the fact that most psycholinguistic models seek to explain either comprehension or production phenomena. The potential existence of two independent cognitive systems underlying linguistic performance doubles the problem of linking the theory of linguistic knowledge and the theory of linguistic performance, making the integration of linguistics and psycholinguistics harder. This thesis thus aims to unify the structure building system in comprehension, i.e., the parser, and the structure building system in production, i.e., the generator, into one, so that the linking theory between knowledge and performance can also be unified into one. I will discuss and unify both existing and new data pertaining to how structures are assembled in understanding and speaking, and attempt to show that the unification between parsing and generation is at least a plausible research enterprise. In Chapter 1, I will discuss the previous and current views on how parsing and generation are related to each other. I will outline the challenges for the current view that the parser and the generator are the same cognitive mechanism. This single system view is discussed and evaluated in the rest of the chapters. 
In Chapter 2, I will present new experimental evidence suggesting that the grain size of the pre-compiled structural units (henceforth simply structural units) is rather small, contrary to some models of sentence production. In particular, I will show that the internal structure of the verb phrase in a ditransitive sentence (e.g., The chef is donating the book to the monk) is not specified at the onset of speech, but is specified before the first internal argument (the book) needs to be uttered. I will also show that this timing of structural processes with respect to the verb phrase structure is earlier than the lexical processes of verb internal arguments. These two results in concert show that the size of structure building units in sentence production is rather small, contrary to some models of sentence production, yet structural processes still precede lexical processes. I argue that this view of generation resembles the widely accepted model of parsing that utilizes both top-down and bottom-up structure building procedures. In Chapter 3, I will present new experimental evidence suggesting that the structural representation strongly constrains the subsequent lexical processes. In particular, I will show that conceptually similar lexical items interfere with each other only when they share the same syntactic category in sentence production. A mechanism that I call syntactic gating will be proposed; this mechanism characterizes how the structural and lexical processes interact in generation. I will present two Event Related Potential (ERP) experiments that show that lexical retrieval in (predictive) comprehension is also constrained by syntactic categories. I will argue that the syntactic gating mechanism is operative both in parsing and generation, and that the interaction between structural and lexical processes in both parsing and generation can be characterized in the same fashion. 
In Chapter 4, I will present a series of experiments examining the timing at which verbs’ lexical representations are planned in sentence production. It will be shown that verbs are planned before the articulation of their internal arguments, regardless of the target language (Japanese or English) and regardless of the sentence type (active object-initial sentences in Japanese, passive sentences in English, and unaccusative sentences in English). I will discuss how this result sheds light on the notion of incrementality in generation. In Chapter 5, I will synthesize the experimental findings presented in this thesis and in previous research to address the challenges to the single system view I outlined in Chapter 1. I will then conclude by presenting a preliminary single system model that can potentially capture both the key sentence comprehension and sentence production data without assuming distinct mechanisms for each.
Abstract:
This thesis seeks to contextualize the theoretical debate on the implementation of the Federal Government's public education policy for distance learning and the legal foundations for its enforcement, in order to raise questions and comments on the topic. Its importance lies in providing scientific input for the academy, particularly at UFRN, and elements for society to question and rethink the complex relationship between socio-economic and geographic conditions and access to higher education. It consists of a descriptive study of the institutionalization of distance education at UFRN as a mechanism for expanding access to higher education. The research seeks to understand whether the distance undergraduate courses offered through the UAB system and implemented at UFRN promote expanded access to higher education, since it is during implementation that rules, routines, and social processes are converted from intentions into action. The discussion is framed by two opposing implementation models: top-down and bottom-up. It is worth noting that the PNE and PDE documents and the UAB programs and meetings reflect positively on improving the educational level of the country's population. This is a qualitative study based on bibliographic, documentary, and field research, in which four interviews were conducted in 2010 with the SEDIS/UAB management team at UFRN. The data were analyzed using document analysis and content analysis techniques. The results show that the implementation of distance education at UFRN is still in progress. 
According to our results, the research objective was achieved, but there is a need to rethink the infrastructure conditions of the learning centers, the structure of the academic calendar, and the management of SEDIS/UFRN with regard to expanding existing places and offering new courses, given the need to redesign the Secretariat's capacity to host the undergraduate courses offered by the Federal Government for implementation at the institution. It was also found that dropout levels still present a challenge to this teaching model. In this context, we conclude that the greatest contribution of UAB, and consequently of UFRN, through its distance undergraduate courses (teaching degrees in Mathematics, Physics, Chemistry, Geography, and Biological Sciences, in addition to bachelor's degrees in Business and Public Administration) is the increase in the number of places and the accessibility afforded to a population previously deprived of access to university.
Abstract:
With the disorganized decentralization that followed Brazil's 1988 Constitution, municipalities were raised to the status of federal entities. This phenomenon, which became known as "municipalism", also brought negative effects, such as the low financial, economic, and political capacity of these entities. Faced with this reality, municipalities turned to collaborative models to address public policy issues that transcend their borders; one of these models is the public consortium, an organization of federal entities that aims to implement public policies they could not carry out alone, or could carry out only at great cost. The situation of municipalities is aggravated in metropolitan regions (MRs), because MRs have a formation history that does not encourage cooperation: they were created top-down during the military regime. Furthermore, metropolitan municipalities exhibit significant power asymmetries, localist visions, rigidly earmarked revenues, varied conurbation scenarios, and difficulties in standardizing concepts, all of which contribute to the view of metropolitan areas as spaces of low cooperation. The problem addressed in this work is thus the presence of collaborative arrangements, such as public consortia, in metropolitan areas, which are seen as areas of low cooperation. To elucidate it, the cases of CONDIAM/PB and Consórcio Grande Recife/PE were analyzed, as they are apparently antagonistic yet share some points of similarity. The analysis is grounded in the theory of common-pool resources, which allows for collective action through the initiative of individuals, and adopts the IAD Framework as its analytical model, organizing the analysis along three axes: external variables, the action arena, and outcomes. 
The method was classified as exploratory and descriptive. For the data analysis stage, document analysis and content analysis were used, in addition to separating the cases according to their specificities. The study found that CONDIAM/PB was a strategy of the municipal government of João Pessoa to attract funds from the Federal Government to build a landfill, and that over the years the ideology of cooperation was set aside, the localist view of the municipalities prevailing. In the case of Consórcio Grande Recife/PE, members act with some degree of cooperation, especially regarding the collaborative aspect of the region, but the power of the state of Pernambuco still prevails in the decisions and directions of the consortium. It is concluded that the public consortia analyzed are an experience of collaborative arrangement arising from members' initiative, as the theory of common-pool resources proposes, but have not yet consolidated as a practice of collective action capable of overcoming the dilemmas faced by metropolitan areas.
Abstract:
The factors that shape the development of rural territories and explain the success or failure of strategies driven from below (bottom-up) or induced from above (top-down) have concerned analysts for several decades, as they observe the limitations of "rural territorial development" approaches in capturing the complexity of that process. Having focused above all on public policies and their effects on the development of rural territories, development analysts have seen the need to draw on other perspectives that capture the dynamics occurring within local civil society, both the relations among the various socio-economic and institutional actors present in the territory and their interaction with the public bodies charged with implementing those policies. The general objective of this doctoral thesis has been to analyze the social dynamics that emerge in natural areas subject to management and regulation policies, showing the degree of influence that the various networks in which local populations organize themselves have on the application of those policies. From the empirical research conducted, and its integration into the theoretical framework used, we have been able to draw results referring to the concrete, localized reality of the REBISE which show that reconciling the objectives of "conservation" and "development" in territories inhabited by local communities closely tied to natural areas requires addressing environmental, social, and economic problems in an integrated way. Attempting to achieve those objectives through sectoral policies leads protection programs to fail, since only partial and limited objectives are attained. 
However high the ecological value of such natural areas, and however strong the protection they receive from international bodies (as with the "biosphere reserves" of UNESCO's MaB programme), "conserving" these areas cannot be achieved without the collaboration of local populations. This requires combining top-down and bottom-up strategies, seeking synergies between public authorities and the social groups present in the territory. Our research points to the need to empower local communities so as to foster bridging social capital directed at building a broad territorial pact, one that transcends the particular interests of each group and pursues the general interest of the territory: the conservation of natural resources and the improvement of the well-being and quality of life of the families who live there. Otherwise, "palliative" projects will continue to be promoted that alleviate some of the local populations' problems in the short term but keep them in stagnation and poverty.
Abstract:
Paper prepared by Marion Panizzon and Charlotte Sieber-Gasser for the International Conference on the Political Economy of Liberalising Trade in Services, Hebrew University of Jerusalem, 14-15 June 2010. Recent literature has shed light on the economic potential of cross-border networks. These networks, consisting of expatriates and their acquaintances from abroad and at home, provide the basis for the creation of cross-border value-added chains and therewith the means for turning brain drain into brain circulation. Both aspects are potentially valuable for economic growth in the developing world. Unilateral co-development policies operating through co-funding of expatriate business ventures, as well as bilateral agreements liberalising circular migration for a limited set of persons, testify to governments' increasing awareness of the potential that expatriate networks hold for economic growth in developing countries. Whereas such piecemeal efforts are valuable, this paper argues that, viewed from a long-term perspective, these top-down, government-mandated diaspora stimulation programmes will not replace the market-driven liberalisation of infrastructure and other services in developing countries. Nor will they carry, in the case of circular labour migration, the political momentum to liberalise labour-market admission for those non-nationals who will eventually emerge as the future transnational entrepreneurs. It will take a combination of mode 4 and infrastructure services openings-cum-regulation for countries on both sides of the spectrum to provide the basis and precondition for transnational business and entrepreneurial networks to emerge and translate into cross-border, value-added production chains. 
Two key issues are of particular relevance in this context: (i) the services sector, especially infrastructure, tends to suffer from inefficiencies, particularly in developing countries, and (ii) labour migration, a highly complex issue, still faces disproportionately rigid barriers despite well-documented global welfare gains. Both hinder emerging markets from fully taking advantage of the potential of these cross-border networks. Adapting the legal framework to enhance the regulatory and institutional frameworks for services trade, especially in infrastructure services sectors (ISS) and labour migration, could provide the incentives necessary for brain circulation and strengthen cross-border value-added chains by lowering transaction costs. This paper analyses the shortfalls of the global legal framework, namely the shallow status quo of GATS commitments in ISS and mode 4 in particular, in relation to stimulating brain circulation and the creation of cross-border value-added chains in emerging markets. It highlights the necessity of adapting the legal framework, at both the global and the regional level, to stimulate broader and wider market access in the four key ISS sectors (telecommunications, transport, professional and financial services) in developing countries, as domestic supply capacity, global competitiveness and economic diversification in ISS sectors are necessary for mobilising expatriate returns, both physical and virtual. The paper argues that industrialised, labour-receiving countries need to offer mode 4 market access to wider categories of persons, especially to students, graduate trainees and young professionals from abroad. Furthermore, free trade in semi-finished products and mode 4 market access are crucial for the creation of cross-border value-added chains across the developing world. 
Finally, on the basis of a case study of Jordan, the paper discusses why the key features of trade agreements that promote circular migration and the creation of cross-border value-added chains consist of trade liberalisation in services and liberal migration policies.
Abstract:
The current socio-economic reality, marked by the technological (r)evolution of recent years and by demographic and urban explosion, entails two major problems. On the one hand, climate change driven by the overexploitation of resources and non-renewable energy; on the other, the loss of specific identities and cultural processes brought about by globalization. In response, several authors propose harnessing technology itself and the new network society to give an answer suited to the present moment. Computational tools permit more complex designs that optimize resources and processes, minimizing environmental impact. Against mass production and the loss of identity, the computational formulation of global problems makes it possible to move from the mass production of the last century to mass customization, giving specific answers for each context. These computational processes must also connect the different social actors involved and make them participants in design. This research therefore builds on Christopher Alexander's spatial patterns and other algorithmic models of computer-aided design, since they describe parametric solutions to recurring architectural design conflicts. Their formulation allows each base solution to generate context-specific responses while being corrected and optimized by all its users, as it can be shared digitally. The aim is for architectural design to respond to objective criteria based on experience and on participatory, democratic critique grounded in the patterns, so that designs do not arise from an imposed, closed top-down approach but instead give greater weight to the active participation of the social actors involved in defining and using them. 
Finally, this research seeks to show how patterns can play a decisive role in the abstract conceptualization of design, while other algorithmic methods reach more concrete phases of the project. The digital patterns proposed here thus focus on the customization of design, whereas their use by other authors pursues its optimization. To this end, the research analyzes the Serpentine Gallery summer pavilions as case studies in which to assess the impact of patterns on current architectural design and their possible adaptation to parametric design.
Abstract:
Sedentary consumers play an important role in controlling prey populations and, hence, their patterns of abundance, distribution and coexistence on shores are important for evaluating their potential influence on ecosystem dynamics. Here, we aimed to describe their spatio-temporal distribution and abundance in relation to wave exposure in the intertidal rocky shores of the south-west Atlantic, to provide a basis for further understanding of ecological processes in this system. The abundance and composition of the functional groups of sessile organisms and sedentary consumers were assessed by sampling the intertidal zone of sheltered and moderately exposed shores over a period of one year. The sublittoral fringe of sheltered areas was dominated by macroalgae, while the low midlittoral was dominated by bare rock and barnacles. In contrast, filter-feeding animals prevailed at exposed shores, probably explaining the higher abundance of the predator Stramonita haemastoma at these locations. Limpets were more abundant in the midlittoral zone of all shores, while sea urchins were found exclusively in the sublittoral fringe of moderately exposed shores, thereby adding grazing pressure in these areas. The results showed patterns of coexistence, distribution and abundance of these organisms in this subtropical area, presumably as a result of wave action, competition and prey availability. They also provided insights into the influence of top-down and bottom-up processes in this area.
Abstract:
Esta investigación toma como tema de referencia los costes sanitarios en originados en la Unidad de Partos del Servicio de Obstetricia y Ginecología de un Hospital de Nivel I. El objetivo general de la presente investigación se concreta en la definición y determinación de un conjunto de indicadores que permitan cuantificar el grado de eficiencia en la actividad sanitaria. Estos indicadores se construyen sobre variables representativas de coste, relativizadas por la actividad de la unidad, medida ésta por el número de casos atendidos. Otro indicador global que se podría haber utilizado es el número de estancias causadas, si bien hay razones, que se explican a lo largo del trabajo, que desaconsejan su utilización. El ámbito de estudio de este trabajo lo constituye la unidad de gestión clínica (servicio) de obstetricia y ginecología de un hospital de Nivel I de la Comunidad Autónoma de Andalucía y el año de referencia del estudio es 2005. Dentro de este servicio se ha centrado la atención en los partos atendidos, por la representatividad que los mismos tienen en la actividad de la unidad, desagregando la diferente tipología de aquellos mediante el uso de los correspondientes GRD. Las fuentes de información utilizadas han sido: • Conjunto Mínimo Básico de Datos (CMBD) • Grupos Relacionados con el Diagnóstico (GRD). • Relación de los GRD con el CMBD. • Cuadros de Mando Integrales del Centro Hospitalario. • Contabilidad Analítica del Hospital (COANHyD). • Contrato Programa del Centro Sanitario. • Instituto de Estadística de Andalucía (IEA). • Instituto Nacional de Estadística de España (INE). • Sistema Estadístico Europeo (EUROSTAT). 
Una vez se han determinado los costes controlables para todas las categorías y para cada uno de los GRD de partos, se procede a la determinación de una serie de indicadores que van a ser de gran utilidad para las conclusiones de la investigación, en cuanto que van a proporcionar una información determinante para los responsables de las unidades en la búsqueda de la eficiencia en la aplicación de los recursos. De igual manera van a ser útiles para establecer comparaciones con otras unidades, ya sean del mismo centro hospitalario o de otros centros. Si se hace una revisión de la literatura, la mayoría de los indicadores de eficiencia se formalizan mediante un cociente en el que el numerador representa una variable de coste y el denominador una variable de actividad, identificándose esta última por los casos tratados o las estancias causadas. La propuesta que se hace en la presente investigación es la de aplicar la primera de las alternativas enunciadas, es decir, el coste por caso, ya que las estancias causadas, aunque no han intervenido activamente en aquellos costes que se han estimado siguiendo la estrategia bottom up, sí se han utilizado, por así establecerlo la contabilidad analítica, a la hora de determinar los costes estimados mediante el modelo top down, como es el caso de todos los costes no controlables. Además del coste por caso que, como se ha dicho, es uno de los indicadores de eficiencia más citados en la literatura, podrían utilizarse otros indicadores como el coste por producción ajustada, que se define como el cociente entre los costes de explotación, que son los que se han contemplado en la presente investigación, y el número de altas ajustado por el peso relativo del correspondiente GRD. El coste por producción ajustada puede parecer más preciso. Sin embargo, depende mucho de la homogeneidad existente en la definición de las patologías que conforman cada GRD, es decir, depende del grado de variabilidad intra GRD. 
Since such homogeneity is not easy to achieve, the advantage just noted may be offset to some extent by the drawback of reduced neutrality. Broken down by DRG, the cost corresponding to DRGs 371, 372 and 373 accounts for more than 70% of the total cost, and that of DRGs 370 and 651 for around 19%. Next in importance come DRG 650 with 5% and DRG 375 with about 3%, the remaining 2% being shared by DRGs 374 and 652. Without distinguishing between controllable and non-controllable costs, the total cost incurred by the unit under study in 2005 amounts to 16,956,541 euros, of which 57.18% corresponds to controllable costs and the remaining 42.82% to non-controllable costs. By item, the largest relative weight in the cost structure belongs to the staff assigned to the service, at 48.16%, with the remaining staff accounting for 29.88%; together they make up slightly more than 78% of the total cost. Next in importance is the miscellaneous item, contributing 6.52%, due mainly to the cost of sterilization sets and of laundry and linen, which account for 61% and 34% of that item respectively. The contributions of user administration and taxes are residual. Third come the outsourced services, at 4.04% of the total cost, chief among them the cleaning contract, which represents 70% of that item. Fourth are laboratory determinations, with a relative weight of 3.37%, led by biochemistry at 34.26%, followed by immunology at 30.25% and blood counts at 18.79%, these three together making up slightly more than 83% of the item's total.
Food, consumable materials and utilities contribute 2.61%, 2.55% and 2.38% respectively; notable within these items are dressing materials, at 66.58% of their item, electricity consumption, at 36.91% of its total, and oxygen, at 28.68% of that same total. The smallest, almost residual, relative weight corresponds to pharmacy costs, with a share of 0.49% of the total cost.
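As a quick consistency check, the controllable/non-controllable split reported above can be reproduced from the stated total and shares. A sketch; only the 2005 total and the 57.18%/42.82% shares come from the text, the computation itself is illustrative:

```python
total_cost = 16_956_541          # euros, 2005 total for the unit (from the text)
controllable_share = 0.5718      # 57.18% controllable (from the text)

controllable = total_cost * controllable_share
non_controllable = total_cost - controllable  # the remaining 42.82%

print(round(controllable))       # 9695750
print(round(non_controllable))   # 7260791
```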
Resumo:
Top-down (grazing) and bottom-up (nutrient, light) controls are both important in the regulation of freshwater ecosystems. The relative importance of these factors can change in space and time, but in tropical lakes bottom-up regulation has been reported as the more influential. The present study tested the hypothesis that the phytoplankton growth rate in the Armando Ribeiro reservoir, a large eutrophic reservoir in the semi-arid region of Rio Grande do Norte state, is limited more by nutrient availability than by zooplankton grazing pressure. Bioassays were conducted monthly from September 2008 to August 2009, manipulating two levels of nutrients (with/without addition) and two levels of grazers (with/without removal). The experimental design was a 2×2 factorial with four treatments (×5 replicates): (i) control with water and zooplankton from the natural site ( C ), (ii) nutrient addition ( +NP ), (iii) zooplankton removal ( -Z ) and (iv) zooplankton removal with nutrient addition ( -Z+NP ). For each bioassay, transparent plastic bottles (500 ml) were incubated for 4 or 5 days at two depths: the Secchi depth (high light) and three times the Secchi depth (low light). Water samples were collected from each bottle at the beginning and end of the incubation period for chlorophyll-a concentration analysis and zooplankton density counts, and phytoplankton growth rates were calculated. A two-way ANOVA was performed to test for significant effects (p < 0.005) of nutrient addition and grazer removal, as well as for a significant interaction between the factors, on phytoplankton growth rates. Effect magnitudes were calculated to assess the relative importance of each process. The results show that phytoplankton growth was generally stimulated by nutrient addition, whereas zooplankton removal rarely stimulated it. Some significant interactions between nutrient addition and grazer removal on phytoplankton growth did occur.
In conclusion, this study suggests that phytoplankton growth in the studied reservoir is controlled more by bottom-up (ascending) factors than by top-down (descending) ones.
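The growth-rate calculation and the effect decomposition of the 2×2 design described above can be sketched as follows. The chlorophyll-a values are invented for illustration, and the simple main-effect/interaction contrasts below are one common way to express effect magnitudes in a 2×2 factorial; the study's own ANOVA-based calculation may differ:

```python
import math

def growth_rate(chl0: float, chl1: float, days: float) -> float:
    """Apparent phytoplankton growth rate (per day) from chlorophyll-a:
    r = ln(chl1 / chl0) / days."""
    return math.log(chl1 / chl0) / days

# Hypothetical mean chlorophyll-a (ug/L) before/after a 4-day incubation,
# one mean rate per treatment (labels follow the abstract):
rates = {
    "C":     growth_rate(5.0, 6.0, 4),    # control
    "+NP":   growth_rate(5.0, 12.0, 4),   # nutrient addition
    "-Z":    growth_rate(5.0, 6.5, 4),    # zooplankton removal
    "-Z+NP": growth_rate(5.0, 14.0, 4),   # removal + addition
}

# Main effects and interaction for the 2x2 factorial design:
nutrient_effect = ((rates["+NP"] + rates["-Z+NP"]) - (rates["C"] + rates["-Z"])) / 2
grazer_effect   = ((rates["-Z"] + rates["-Z+NP"]) - (rates["C"] + rates["+NP"])) / 2
interaction     = ((rates["-Z+NP"] - rates["-Z"]) - (rates["+NP"] - rates["C"])) / 2
```

With these invented numbers the nutrient effect dominates the grazer effect, mirroring the bottom-up conclusion of the abstract; with real data, the significance of each contrast would be assessed by the two-way ANOVA described above.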