975 results for Flooding problem in the fields
Abstract:
The dynamics of forests subject to inundation appear to be strongly influenced by the frequency and intensity of natural disturbances such as flooding. In a late-successional tidal floodplain forest near the Amazon port of Belém, Brazil, we tested this prediction by measuring seasonal patterns of phenology and litterfall in relation to two key variables: rainfall and tide levels. In addition, we estimated the root biomass and the annual growth of the forest community by measuring stem increments over time. Our results showed high correlations between phenological events (flowering and fruiting) and rainfall and tide levels, while correlations between litterfall and these variables were generally weaker. Contrary to our prediction, root biomass to 1 m depth showed no significant differences along the topographic gradient, and the root biomass at all topographic levels was low to intermediate compared with other neotropical forests. Both litterfall and total stem increment were high compared with other tropical forests, indicating the high productivity of this ecosystem.
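To make the correlation step concrete, here is a minimal sketch of the kind of seasonal analysis described above. The monthly values and the choice of Spearman's rank correlation are illustrative assumptions; the abstract does not name the statistic used.

```python
# Hypothetical monthly series: rainfall vs. number of trees in flower.
# Values are invented for illustration only.
from scipy.stats import spearmanr

rainfall_mm = [320, 410, 450, 380, 260, 160, 120, 110, 130, 150, 210, 280]
trees_flowering = [5, 3, 2, 4, 9, 14, 18, 20, 17, 12, 8, 6]

rho, p = spearmanr(rainfall_mm, trees_flowering)
print(f"Spearman rho = {rho:.2f}, p = {p:.4f}")  # strongly negative in this toy case
```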
Abstract:
The sustainable management of municipal solid waste in the Kathmandu Valley has always been a challenging task. Solid waste generation has risen rapidly in the Kathmandu Valley over the last decade due to a booming population and rapid urbanization. Finding appropriate landfill sites for the disposal of solid waste generated by the households of the Kathmandu Valley has always been a major problem for the Nepalese government. 65% of the total waste generated by the households of Nepal consists of organic material. As large fractions of generated household waste are organic in nature, composting can be considered one of the best sustainable ways to recycle organic waste generated by the households of Nepal. Model Community Society Development (MCDS), a non-governmental organization of Nepal, carried out a small-scale project in five households of the Kathmandu Valley by installing composting reactors. This thesis is based on this small-scale project and used secondary data provided by MCDS Nepal to carry out the study. Proper management of organic waste can be done at the household level through the use of composting reactors. The end product, compost, can be used as a soil conditioner for agricultural purposes such as organic farming, roof-top farming and gardening. The overall average organic waste generation in the Kathmandu Valley is found to be 0.23 kg/person/day, and the total amount of organic household waste generated in the Kathmandu Valley is around 210 Gg/yr. The compost produced by the five composting reactors has a high moisture content but contains sufficient nutrients for soil fertility and plant growth. The installation of five composting reactors in five households has prevented 2.74 Mg of organic waste from going into landfills, thus avoiding 107 kg of methane emissions, which is equivalent to 2.7 Mg of carbon dioxide.
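As a sanity check on the closing figures, a minimal sketch of the conversion chain, assuming the IPCC AR4 100-year global warming potential of 25 for methane (a value consistent with the numbers quoted; the thesis's actual factors are not given in the abstract):

```python
# Waste diverted -> methane avoided -> CO2-equivalent avoided.
waste_diverted_mg = 2.74            # Mg of organic waste kept out of landfills
ch4_per_mg_waste = 107 / 2.74       # implied factor: ~39 kg CH4 per Mg of waste
gwp_ch4 = 25                        # assumed 100-year GWP of methane (AR4)

ch4_avoided_kg = waste_diverted_mg * ch4_per_mg_waste
co2e_avoided_mg = ch4_avoided_kg * gwp_ch4 / 1000   # kg -> Mg

print(f"CH4 avoided:  {ch4_avoided_kg:.0f} kg")     # ~107 kg
print(f"CO2e avoided: {co2e_avoided_mg:.1f} Mg")    # ~2.7 Mg
```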
Abstract:
With the rapid development of society and changing lifestyles, the generation of commercial waste is becoming harder to control. The situation of packaging waste and food waste, the main fractions of commercial waste, in different countries in Europe and Asia is analyzed in order to evaluate and suggest necessary improvements to the existing waste management system in the city of Hanoi, Vietnam. From all of the city's waste generation sources, a total of approximately 4000 tons of mixed waste is transported to the composting facility and the disposal site, emitting a huge amount, 1.6 Mt, of GHG emissions to the environment. Recycling takes place spontaneously through informal pickers, which makes the whole system difficult to manage and the overall data uncertain. A comparative calculation, resulting in only approximately 0.17 Mt of CO2-equivalent emissions, suggests incineration as the solution to the problems of the overloaded landfill and the rising energy demand of the inhabitants.
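A rough reading of the figures above, assuming the 4000 tons of mixed waste are a daily flow (the abstract leaves the time frame implicit); the per-ton factors below are derived from the quoted totals, not taken from the thesis:

```python
# Compare the two disposal scenarios per ton of waste handled.
daily_waste_t = 4000                  # assumed tons of mixed waste per day
annual_waste_t = daily_waste_t * 365  # ~1.46 million tons per year

scenarios = {
    "composting + landfill (current)": 1.6e6,   # t CO2e per year (quoted)
    "incineration (proposed)": 0.17e6,          # t CO2e per year (quoted)
}
for name, co2e_t in scenarios.items():
    print(f"{name}: {co2e_t / annual_waste_t:.2f} t CO2e per t of waste")
```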
Abstract:
The open innovation paradigm states that the boundaries of the firm have become permeable, allowing knowledge to flow inwards and outwards to accelerate internal innovations and take unused knowledge to the external environment, respectively. The successful implementation of open innovation practices in firms like Procter & Gamble, IBM, and Xerox, among others, suggests that it is a sustainable trend which could provide a basis for achieving competitive advantage. However, implementing open innovation can be a complex process which involves several domains of management, and whose terminology, classification, and practices have not been fully agreed upon. Thus, with many possible ways to address open innovation, the following research question was formulated: How could Ericsson LMF assess which open innovation mode to select depending on the attributes of the project at hand? The research followed the constructive research approach, which has the following steps: find a practically relevant problem, obtain a general understanding of the topic, innovate the solution, demonstrate that the solution works, show theoretical contributions, and examine the scope of applicability of the solution. The research involved three phases of data collection and analysis: an extensive literature review of open innovation, strategy, business models, innovation, and knowledge management; direct observation of the environment of the case company through participative observation; and semi-structured interviews based on six cases involving multiple and heterogeneous open innovation initiatives. Results from the cases suggest that the selection of modes depends on multiple reasons, with a stronger influence of factors related to strategy, business models, and resource gaps. Based on these and other factors found in the literature review and observations, it was possible to construct a model that supports approaching open innovation. The model integrates perspectives from multiple domains of the literature review, observations inside the case company, and factors from the six open innovation cases. It provides steps, guidelines, and tools to approach open innovation and assess the selection of modes. Measuring the impact of open innovation could take years; thus, implementing and testing the model in its entirety was not possible due to time limitations. Nevertheless, it was possible to validate the core elements of the model with empirical data gathered from the cases. In addition to constructing the model, this research contributed to the literature by increasing the understanding of open innovation, providing suggestions to the case company, and proposing future steps.
Abstract:
In this paper, the topology of cortical visuotopic maps in adult primates is reviewed, with emphasis on recent studies. The observed visuotopic organisation can be summarised with reference to two basic rules. First, adjacent radial columns in the cortex represent partially overlapping regions of the visual field, irrespective of whether these columns are part of the same or different cortical areas. This primary rule is seldom, if ever, violated. Second, adjacent regions of the visual field tend to be represented in adjacent radial columns of the same area. This rule is not as rigid as the first, as many cortical areas form discontinuous, second-order representations of the visual field. A developmental model based on these physiological observations, and on comparative studies of cortical organisation, is then proposed in order to explain how a combination of molecular specification steps and activity-driven processes can generate the variety of visuotopic organisations observed in the adult cortex.
Abstract:
JNK1 is a MAP kinase that has proven to be a significant player in the central nervous system. It regulates brain development and the maintenance of dendrites and axons. Several novel phosphorylation targets of JNK1 were identified in a screen performed in the Coffey lab. These proteins are mainly involved in the regulation of the neuronal cytoskeleton, influencing the dynamics and stability of microtubules and actin. These structural proteins form the dynamic backbone for the elaborate architecture of the dendritic tree of a neuron. The initiation and branching of dendrites require a dynamic interplay between the cytoskeletal building blocks. Both microtubules and actin are decorated by associated proteins which regulate their dynamics. The dendrite-specific, high-molecular-weight microtubule-associated protein 2 (MAP2) is an abundant protein in the brain, the binding of which stabilizes microtubules and influences their bundling. Its expression in non-neuronal cells induces the formation of neurite-like processes from the cell body, and its function is highly regulated by phosphorylation. JNK1 was shown to phosphorylate the proline-rich domain of MAP2 in vivo in a previous study performed in the group. Here we verify three threonine residues (T1619, T1622 and T1625) as JNK1 targets, the phosphorylation of which increases the binding of MAP2 to microtubules. This binding stabilizes the microtubules and increases process formation in non-neuronal cells. Phosphorylation-site mutants were engineered in the lab. The mutant of MAP2 that cannot be phosphorylated at these residues (MAP2-T1619A, T1622A, T1625A) fails to bind microtubules, while the pseudo-phosphorylated form, MAP2-T1619D, T1622D, T1625D, binds efficiently and induces process formation even without the presence of active JNK1. Ectopic expression of MAP2-T1619D, T1622D, T1625D in vivo in mouse brain led to a striking increase in the branching of cortical layer 2/3 (L2/3) pyramidal neurons compared with MAP2-WT. Dendritic complexity defines the receptive field of a neuron and dictates the output to postsynaptic cells. Previous studies in the group indicated altered dendrite architecture of pyramidal neurons in the Jnk1-/- mouse motor cortex. Here, we used Lucifer Yellow loading and Sholl analysis of neurons to study dendritic branching in more detail. We report a striking, opposing effect in the absence of Jnk1 in cortical layers 2/3 and 5 of the primary motor cortex. The basal dendrites of pyramidal neurons close to the pial surface in L2/3 show reduced complexity. In contrast, the L5 neurons, which receive massive input from the L2/3 neurons, show greatly increased branching. Another novel substrate identified for JNK1 was MARCKSL1, a protein that regulates actin dynamics. It is highly expressed in neurons, but also in various cancer tissues. Three phosphorylation target residues for JNK1 were identified, and it was demonstrated that their phosphorylation reduces actin turnover and retards cell migration. Actin is the main cytoskeletal component of dendritic spines, the site of most excitatory synapses on pyramidal neurons. The density and gross morphology of the Lucifer Yellow-filled dendrites were characterized, and we show reduced density and altered morphology of spines in the motor cortex and in the hippocampal area CA3. Dynamic dendritic spines are widely considered to function as the cellular correlate of learning. We used a Morris water maze to test spatial memory.
Here, the wild-type mice outperformed the knock-out mice during the acquisition phase of the experiment, indicating impaired spatial memory. The L5 pyramidal neurons of the motor cortex project to the spinal cord and regulate the movement of distinct muscle groups. Thus, the altered dendrite morphology in the motor cortex was expected to have an effect on the input-output balance in the signaling from the cortex to the lower motor circuits. A battery of behavioral tests was conducted on the wild-type and Jnk1-/- mice, and the knock-outs performed poorly compared with wild-type mice in tests assessing balance and fine motor movements. This study expands our knowledge of JNK1 as an important regulator of the dendritic fields of neurons and their manifestations in behavior.
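Sholl analysis, mentioned above, quantifies dendritic branching by counting how often the traced dendrites cross concentric circles centered on the soma. A minimal sketch under a simplified morphology format (straight 2D segments; real reconstructions are richer):

```python
# Count dendrite crossings of concentric circles around the soma.
import numpy as np

def sholl_profile(segments, soma, radii):
    """Number of segment crossings for each radius."""
    soma = np.asarray(soma, dtype=float)
    crossings = []
    for r in radii:
        n = 0
        for p0, p1 in segments:
            d0 = np.linalg.norm(np.asarray(p0) - soma)
            d1 = np.linalg.norm(np.asarray(p1) - soma)
            # A segment crosses the circle if its endpoints straddle radius r.
            n += (d0 - r) * (d1 - r) < 0
        crossings.append(n)
    return crossings

# Toy morphology: one trunk that bifurcates into two branches.
segments = [((0, 0), (30, 5)), ((30, 5), (60, 20)), ((30, 5), (50, -25))]
print(sholl_profile(segments, soma=(0, 0), radii=[20, 40, 55]))  # -> [1, 2, 2]
```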
Abstract:
Hydration is recommended in order to decrease the overload on the cardiovascular system when healthy individuals exercise, mainly in the heat. To date, no hydration criteria have been established for hypertensive (HY) individuals during exercise in a hot environment. Eight male HY volunteers without any other medical problem and 8 normal (NO) subjects (46 ± 3 and 48 ± 1 years; 78.8 ± 2.5 and 79.5 ± 2.8 kg; 171 ± 2 and 167 ± 1 cm; body mass index = 26.8 ± 0.7 and 28.5 ± 0.6 kg/m²; resting systolic blood pressure (SBP) = 142.5 and 112.5 mmHg and diastolic blood pressure (DBP) = 97.5 and 78.1 mmHg, respectively) exercised for 60 min on a cycle ergometer (40% of VO2peak) with (500 ml 2 h before and 115 ml every 15 min throughout exercise) or without water ingestion, in a hot, humid environment (30°C and 85% humidity). Rectal (Tre) and skin (Tsk) temperatures, heart rate (HR), SBP, DBP, double product (DP), urinary volume (Vu), urine specific gravity (Gu), plasma osmolality (Posm), sweat rate (SR), and hydration level were measured. Data were analyzed using ANOVA in a split-plot design, followed by the Newman-Keuls test. There were no differences in Vu, Posm, Gu and SR responses between HY and NO during heat exercise with or without water ingestion, but there was a gradual increase in HR (59 and 51%), SBP (18 and 28%), DP (80 and 95%), Tre (1.4 and 1.3%), and Tsk (6 and 3%) in HY and NO, respectively. HY had higher HR (10%), SBP (21%), DBP (20%), DP (34%), and Tsk (1%) than NO during both experimental situations. The exercise-related differences in SBP, DP and Tsk between HY and NO were increased by water ingestion (P < 0.05). The results showed that cardiac work and Tsk during exercise were higher in HY than in NO, and the difference between the two groups increased even further with water ingestion. It was concluded that the hydration protocol recommended for NO during exercise could induce abnormal cardiac and thermoregulatory responses in HY individuals not receiving drug therapy.
Abstract:
Genomics is expanding the horizons of epidemiology, providing a new dimension for classical epidemiological studies and inspiring the development of large-scale multicenter studies with the statistical power necessary for the assessment of gene-gene and gene-environment interactions in cancer etiology and prognosis. This paper describes the methodology of the Clinical Genome of Cancer Project in São Paulo, Brazil (CGCP), which includes patients with nine types of tumors and controls. Three major epidemiological designs were used to reach specific objectives: cross-sectional studies to examine gene expression, case-control studies to evaluate etiological factors, and follow-up studies to analyze genetic profiles in prognosis. The clinical groups entered patients' data into the electronic database through the Internet. Two approaches were used for data quality control: continuous data evaluation and data entry consistency. A total of 1749 cases and 1509 controls were entered into the CGCP database from the first trimester of 2002 to the end of 2004. Continuous evaluation showed that, for all tumors taken together, only 0.5% of the general form fields still included potential inconsistencies by the end of 2004. Regarding data entry consistency, the highest percentage of errors (11.8%) was observed for the follow-up form, followed by 6.7% for the clinical form, 4.0% for the general form, and only 1.1% for the pathology form. Good data quality is required to transform the data into useful information for clinical application and for preventive measures. The use of the Internet for communication among researchers and for data entry is perhaps the most innovative feature of the CGCP. The monitoring of patients' data guaranteed their quality.
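As an illustration of the data-entry consistency approach, a sketch of a double-entry comparison; the record fields below are hypothetical, not actual CGCP form fields:

```python
# Re-enter a sample of records and count field-level disagreements.
def entry_error_rate(original: list[dict], reentered: list[dict]) -> float:
    """Percentage of fields whose two independent entries disagree."""
    mismatches = total = 0
    for first, second in zip(original, reentered):
        for field in first:
            total += 1
            mismatches += first[field] != second.get(field)
    return 100 * mismatches / total

# Toy example with invented follow-up records.
entry_a = [{"status": "alive", "last_visit": "2004-11-02"}]
entry_b = [{"status": "alive", "last_visit": "2004-11-20"}]
print(f"{entry_error_rate(entry_a, entry_b):.1f}% field error rate")  # 50.0% here
```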
Abstract:
Questions concerning perception are as old as the field of philosophy itself. Taking the first-person perspective as its starting point and drawing on philosophical texts, the study examines the relationship between knowledge and perception. The problem is how one knows what one immediately perceives. The everyday belief that an object of perception is known to be a material object on the grounds of perception is shown to be unreliable. It is possible that directly perceived sensible particulars are mind-internal images, shapes, sounds, touches, tastes and smells. According to the appearance/reality distinction, the world of perception is the apparent realm, not the real external world. However, the distinction does not necessarily refute the existence of the external world. We have a causal connection with the external world via mind-internal particulars, and therefore we have indirect knowledge about the external world through perceptual experience. The research especially concerns the reasons for George Berkeley’s claim that material things are mind-dependent ideas that really are perceived. The necessity of a perceiver’s own qualities for perceptual experience, such as mind, consciousness, and the brain, supports the causal theory of perception. Finally, it is asked why mind-internal entities are present when perceiving an object. Perception would not directly discern material objects without the presupposition of extra entities located between a perceiver and the external world. Nevertheless, the results show that perception is not sufficient to know what a perceptual object is, and that the existence of appearances is necessary to know that the external world is being perceived. However, the impossibility of matter does not follow from Berkeley’s theory. The main result of the research is that singular knowledge claims about the external world never refer directly and immediately to objects of the external world. A perceiver’s own qualities affect how perceptual objects appear in a perceptual situation.
Abstract:
Electrical stimulation has been used for more than 100 years in neuroscientific and biomedical research as a powerful tool for controlled perturbations of neural activity. Despite quickly driving neuronal activity, this technique has some important limitations, such as the impossibility of activating or deactivating specific neuronal populations within a single stimulation site. This problem can be avoided by pharmacological methods based on the administration of receptor ligands able to cause specific changes in neuronal activity. However, intracerebral injections of neuroactive molecules inherently confound the dynamics of drug diffusion with receptor activation. Caged compounds have been proposed to circumvent this problem, allowing spatially and temporally controlled release of molecules. Caged compounds consist of a protecting group and a ligand made inactive by the bond between the two parts. By breaking this bond with light of an appropriate wavelength, the ligand recovers its activity within milliseconds. To test these compounds in vivo, we recorded local field potentials (LFPs) from the cerebral cortex of anesthetized female mice (CF1, 60-70 days, 20-30 g) before and after infusion with caged γ-aminobutyric acid (GABA). After 30 min, we irradiated the cortical surface with pulses of blue light in order to photorelease the caged GABA and measure its effect on global brain activity. Laser pulses significantly and consistently decreased LFP power in four different frequency bands with a precision of a few milliseconds (P < 0.000001); however, the inhibitory effects lasted several minutes (P < 0.0043). The technical difficulties and limitations of neurotransmitter photorelease are presented, and perspectives for future in vivo applications of the method are discussed.
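A minimal sketch of the kind of band-power comparison described above, LFP power per frequency band before and after a light pulse. The band edges, sampling rate, and synthetic data are illustrative assumptions, not values taken from the study:

```python
import numpy as np
from scipy.signal import welch

FS = 1000  # Hz; assumed sampling rate
BANDS = {"delta": (1, 4), "theta": (4, 8), "beta": (13, 30), "gamma": (30, 80)}

def band_power(lfp, fs=FS):
    """Mean spectral power in each band for one LFP segment."""
    freqs, psd = welch(lfp, fs=fs, nperseg=min(len(lfp), fs))
    return {name: psd[(freqs >= lo) & (freqs < hi)].mean()
            for name, (lo, hi) in BANDS.items()}

# Toy comparison: segments "recorded" before and after a laser pulse.
rng = np.random.default_rng(0)
pre = rng.normal(size=2 * FS)          # 2 s of synthetic baseline LFP
post = 0.5 * rng.normal(size=2 * FS)   # attenuated activity after the pulse
for band in BANDS:
    p0, p1 = band_power(pre)[band], band_power(post)[band]
    print(f"{band}: {100 * (p1 - p0) / p0:+.0f}% change in power")
```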
Abstract:
The field of vaccinology was born from the observations by the fathers of vaccination, Edward Jenner and Louis Pasteur, that a permanent, positive change in the way our bodies respond to life-threatening infectious diseases can be obtained by specific challenge with the inactivated infectious agent performed in a controlled manner, avoiding the development of clinical disease upon exposure to the virulent pathogen. Many of the vaccines still in use today were developed on an empirical basis, essentially following the paradigm established by Pasteur, “isolate, inactivate, and inject” the disease-causing microorganism, and are capable of eliciting uniform, long-term immune memory responses that constitute the key to their proven efficacy. However, vaccines for pathogens considered as priority targets of public health concern are still lacking. The literature tends to focus more often on vaccine research problems associated with specific pathogens, but it is increasingly clear that there are common bottlenecks in vaccine research, which need to be solved in order to advance the development of the field as a whole. As part of a group of articles, the objective of the present report is to pinpoint these bottlenecks, exploring the literature for common problems and solutions in vaccine research applied to different situations. Our goal is to stimulate brainstorming among specialists of different fields related to vaccine research and development. Here, we briefly summarize the topics we intend to deal with in this discussion.
Abstract:
Negotiating trade agreements is an important part of government trade policies, economic planning and today's globally operating trading system. In global comparison, the European Union and the United States have been active in forming trade agreements. Now these two economic giants are negotiating a trade agreement of their own, the so-called Transatlantic Trade and Investment Partnership (TTIP). The purpose of this thesis is to understand the reasons for making a trade agreement between two economic areas and the issues it may involve in the case of the TTIP. The TTIP has received a great deal of attention in the media. Opinions on the partnership have been extreme, and the debate has been heated. The purpose of this study is to characterize the nature of the public discussion regarding the TTIP from spring 2013 until 2014. The research problem is to find out what the main issues in the agreement are and what values influence them. The study was conducted by applying methods of critical discourse analysis to the chosen data. This includes gathering the issues from the data based on the attention each has received in the discussion. The underlying motives for raising different issues were analysed by investigating the authors' positions in political, economic and social circles. The perceived economic impacts of the TTIP were also analysed with the same criteria. Some of the most respected economic newspapers globally were included in the research material, as well as papers and reports published by the EU and global organisations. The analysis indicates a clear dichotomy in attitudes towards the TTIP. Key problems include the lack of transparency in the negotiations, the misunderstood investor-state dispute settlement, the constantly expanding regulatory issues and the risk of protectionism. The theory and data suggest that the removal of tariffs is an effective tool for reaching economic gains in the TTIP, and that reducing non-tariff barriers, such as protectionism, would be even more effective. Critics are worried about the rising influence of corporations over governments. The discourse analysis reveals that the supporters of the TTIP hold values related to increasing welfare through economic growth. Critics do not deny the economic benefits but raise the question of inequality as a consequence. Overall, they represent softer values such as sustainable development and democracy as a counterweight to the corporate values of efficiency and the maximisation of profits.
Abstract:
Globalization and worldwide interconnectedness have changed the prevailing modus operandi of organizations around the globe and have challenged existing practices along with the business-as-usual mindset. There are no rules for creating a competitive advantage and positioning within an unstable, constantly changing and volatile globalized business environment. The financial industry, the locomotive or flagship industry of the global economy, has, especially in the aftermath of the financial crisis, reached a point of trying to recover and redefine its strategic orientation and positioning within the global business arena. Innovation has always been a trend and a buzzword, and has been considered by many as the ultimate answer to any kind of problem. The mantra "innovate or die" has prevailed in organizations of every kind in a sometimes ruthless endeavour to develop cutting-edge products and services and capture a landmark position in the market. The emerging shift from a closed to an open innovation paradigm has been considered a new operational mechanism within the management and leadership of the company of the future. In that respect, open innovation research has been growing tremendously by putting forward a new way of exchanging and using surplus knowledge in order to sustain innovation within organizations and at the level of the industry. In this reality, there seems to be something missing: the human element. This research, by going beyond the traditional narratives of open innovation, aims at making an innovative theoretical and managerial contribution grounded in the ongoing discussion of the individual and organizational barriers to open innovation within the financial industry. By working across disciplines and reaching out to primary data, it debunks the myth that open innovation is solely a knowledge inflow and outflow mechanism and sheds light on why and how organizational open innovation works by illuminating the broader dynamics and underlying principles of this fascinating paradigm. Little attention has been given to the role of the human element, the foundational prerequisite of trust encapsulated within the precise and fundamental nature of organizing for open innovation, the organizational capabilities, the individual profiles of open innovation leaders, the definition of open innovation in the realm of the financial industry, the strategic intent of the financial industry, and the need for nurturing a societal impact for human development. In that respect, this research introduces the trust-embedded approach to open innovation as a new and insightful way of organizing for open innovation. It unveils the peculiarities of the corporate and individual spheres that act as a catalyst for the creation of productive open innovation activities. The motivation for this research is captured by the fundamental question of why financial institutions need to recognize the importance of organizing for open innovation. The overarching question is why and how to create a corporate culture of openness in the financial industry, an organizational environment that can help open innovation excel. This research shares novel and cutting-edge outcomes and propositions under the prisms of both theory and practice.
The trust-embedded open innovation paradigm captures the norms and narratives around leading open innovation in the 21st century by cultivating a human-centric mindset that leads to the creation of human organizations, leaving behind the dehumanizing mindset currently prevailing within the financial industry.
Abstract:
Increased awareness and evolving consumer habits have set more demanding standards for the quality and safety control of food products. The production of foodstuffs that fulfill these standards can be hampered by various low-molecular-weight contaminants, such as residues of antibiotics used in animals or mycotoxins. The extremely small size of these compounds has hindered the development of analytical methods suitable for routine use, and the methods currently in use require expensive instrumentation and qualified personnel to operate them. There is a need for new, cost-efficient and simple assay concepts which can be used for field testing and are capable of processing large sample quantities rapidly. Immunoassays have been considered the gold standard for such rapid on-site screening methods. The introduction of directed antibody engineering and in vitro display technologies has facilitated the development of novel antibody-based methods for the detection of low-molecular-weight food contaminants. The primary aim of this study was to generate and engineer antibodies against low-molecular-weight compounds found in various foodstuffs. The three antigen groups selected as targets of antibody development cause food safety and quality defects in a wide range of products: 1) fluoroquinolones, a family of synthetic broad-spectrum antibacterial drugs used to treat a wide range of human and animal infections; 2) deoxynivalenol, a type B trichothecene mycotoxin and a widely recognized problem for crops and animal feeds globally; and 3) skatole (3-methylindole), one of the two compounds responsible for boar taint, found in the meat of monogastric animals. This study describes the generation and engineering of antibodies with versatile binding properties against low-molecular-weight food contaminants, and the subsequent development of immunoassays for the detection of the respective compounds.
Abstract:
The credibility of rules and the elements of power are fundamental keys in the analysis of political institutions. This paper opens the "black box" of the European Union institutions and analyses the problem of credibility in the commitment to the Stability and Growth Pact (SGP). The Pact constituted a formal rule that tried to enforce budgetary discipline on the European states. Compliance with this contract could be ensured by the existence of "third party enforcement" or by the coincidence of the ex-ante and ex-post interests of the states (reputational capital). The fact is that states such as France and Germany failed to comply with the rule and managed to avoid the application of sanctions. This article studies the transactions and the hierarchy of power that exist in the European institutions, and analyses the institutional framework included in the new European Constitution.