Abstract:
Most ecosystems undergo substantial variation over the seasons, ranging from changes in abiotic features, such as temperature, light and precipitation, to changes in species abundance and composition. How seasonality varies along latitudinal gradients is not well known in freshwater ecosystems, despite being very important for predicting the effects of climate change and for advancing ecological understanding. Stream temperature is often well correlated with air temperature and influences many ecosystem features, such as the growth and metabolism of most aquatic organisms. We evaluated the degree of seasonality in ten river mouths along a latitudinal gradient for a set of variables, ranging from air and water temperatures to physical and chemical properties of water and the growth of an invasive fish species (eastern mosquitofish, Gambusia holbrooki). Our results show that although most of the variation in air temperature was explained by latitude and season, this was not the case for water features, including temperature, in lowland Mediterranean streams, which depended less on season and much more on local factors. Similarly, although there was evidence of latitude-dependent seasonality in fish growth, the relationship was nonlinear and weak, and the significant latitudinal differences in growth rates observed during winter were compensated later in the year and did not result in overall differences in size and growth. Our results suggest that although latitudinal differences in air temperature cascade through properties of freshwater ecosystems, local factors and complex interactions often override the variation of water temperature with latitude and might therefore hinder projections of species distribution models and effects of climate change.
Abstract:
The objective of this paper was to show the potential additional insight that results from adding greenhouse gas (GHG) emissions to plant performance evaluation criteria, such as effluent quality (EQI) and operational cost (OCI) indices, when evaluating (plant-wide) control/operational strategies in wastewater treatment plants (WWTPs). The proposed GHG evaluation is based on a set of comprehensive dynamic models that estimate the most significant potential on-site and off-site sources of CO2, CH4 and N2O. The study calculates and discusses the changes in EQI, OCI and the emission of GHGs as a consequence of varying the following four process variables: (i) the set point of aeration control in the activated sludge section; (ii) the removal efficiency of total suspended solids (TSS) in the primary clarifier; (iii) the temperature in the anaerobic digester (AD); and (iv) the control of the flow of anaerobic digester supernatants coming from sludge treatment. Based upon the assumptions built into the model structures, simulation results highlight the potential undesirable effects of increased GHG production when carrying out local energy optimization of the aeration system in the activated sludge section and energy recovery from the AD. Although off-site CO2 emissions may decrease, the effect is counterbalanced by increased N2O emissions, especially since N2O has a 300-fold stronger greenhouse effect than CO2. The reported results emphasize the importance and usefulness of using multiple evaluation criteria to compare and evaluate (plant-wide) control strategies in a WWTP for more informed operational decision making.
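The comparison across the three gases hinges on weighting each one by its greenhouse potency. As a minimal illustration (not part of the paper's models), emissions of CO2, CH4 and N2O can be folded into a single CO2-equivalent figure using 100-year global warming potential (GWP100) factors; the factor values and the sample emission figures below are assumptions for illustration only:

```python
# Illustrative sketch: aggregating emissions of CO2, CH4 and N2O into one
# CO2-equivalent number via 100-year global warming potentials (GWP100).
# The GWP values are common literature figures; the abstract itself cites
# an approximately 300-fold effect for N2O relative to CO2.
GWP100 = {"CO2": 1.0, "CH4": 28.0, "N2O": 298.0}  # kg CO2e per kg of gas

def co2_equivalent(emissions_kg):
    """Sum emissions (kg of each gas) weighted by GWP100 -> kg CO2e."""
    return sum(GWP100[gas] * mass for gas, mass in emissions_kg.items())

# Hypothetical daily plant emissions in kg (not data from the study):
daily = {"CO2": 5000.0, "CH4": 40.0, "N2O": 12.0}
print(co2_equivalent(daily))  # -> 9696.0; small N2O masses dominate via GWP
```

This weighting is why a modest rise in N2O can cancel out a larger drop in off-site CO2, as the abstract describes.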
Abstract:
The aim of the present dissertation is to investigate the marketing culture of research libraries in Finland and to understand the awareness and knowledge base of library management concerning modern marketing theories and practices. The study was based on the notion that a leader in an organisation can have a large impact on its culture. Therefore, it was considered important to learn about the market orientation that initiates at the top management and flows throughout the whole organisation, thus resulting in a particular kind of library culture. The study attempts to examine the marketing culture of libraries by analysing the marketing attitudes, knowledge (underlying beliefs, values and assumptions), behaviour (market orientation), operational policies and activities, and service performance (customer satisfaction). The research was based on the assumption that if the top management of libraries shows market-oriented behaviour, then their marketing attitudes, knowledge, operational policies and activities, and service performance should also be in accordance. The dissertation attempts to connect all these theoretical threads of marketing culture. It investigates thirty-three academic and special libraries in the south of Finland. The library director and three to ten customers from each library participated as respondents in this study. An integrated methodological approach of qualitative as well as quantitative methods was used to gain knowledge of the pertinent issues lying behind the marketing culture of research libraries. The analysis of the whole dissertation reveals that the concept of marketing has a very varied status in Finnish research libraries. Based on the entire findings, three kinds of marketing cultures emerged: the strong (the high fliers), the medium (the brisk runners), and the weak (the slow walkers).
The high fliers appeared to be modern marketing believers, as their marketing approach was customer oriented and found to be closer to the emerging notions of contemporary relational marketing. The brisk runners were found to be traditional marketing advocates, as their marketing approach is more 'library centred' than customer defined and thus in line with 'product orientation', i.e. traditional marketing. 'Let the interested customers come to the library' appeared to be the hallmark of the slow walkers. Application of conscious market orientation is not reflected in the library activities of the slow walkers. Instead, their values, ideology and approach to serving the library customers are more in tune with the 'usual service oriented Finnish way'. The implication of the research is that it pays to be market oriented, which results in higher customer satisfaction of libraries. Moreover, it is emphasised that the traditional user-based service philosophy of Finnish research libraries should not be abandoned, but it needs to be further developed by building a relationship-based marketing system which will help the libraries to become more efficient and effective from the customers' viewpoint. The contribution of the dissertation lies in the framework showing the linkages between the critical components of the marketing culture of a library: antecedents, market orientation, facilitators and consequences. The dissertation delineates the significant underlying dimensions of market-oriented behaviour of libraries, namely customer philosophy, inter-functional coordination, strategic orientation, responsiveness, pricing orientation and competition orientation. The dissertation also showed the extent to which marketing attitudes, behaviour and knowledge were related, and the impact of market orientation on the service performance of libraries. A strong positive association was found to exist between market orientation and marketing attitudes and knowledge.
Moreover, it also shows that a higher market orientation is positively connected with the service performance of libraries, the ultimate result being higher customer satisfaction. The analysis shows that a genuine marketing culture represents a synthesis of certain marketing attitudes, knowledge and selective practices. This finding is particularly significant in the sense that it manifests that marketing culture consists of a certain set of beliefs and knowledge (which form a specific attitude towards marketing) and the implementation of a certain set of activities that actually materialize the attitude towards marketing into practice (market orientation), leading to superior service performance of libraries.
Abstract:
Early identification of beginning readers at risk of developing reading and writing difficulties plays an important role in the prevention and provision of appropriate intervention. In Tanzania, as in other countries, there are children in schools who are at risk of developing reading and writing difficulties. Many of these children complete school without being identified and without proper and relevant support. The main language in Tanzania is Kiswahili, a transparent language. Contextually relevant, reliable and valid instruments of identification are needed in Tanzanian schools. This study aimed at the construction and validation of a group-based screening instrument in the Kiswahili language for identifying beginning readers at risk of reading and writing difficulties. In studying the function of the test, there was special interest in analyzing the explanatory power of certain contextual factors related to the home and school. Halfway through grade one, 337 children from four purposively selected primary schools in Morogoro municipality were screened with a group test consisting of 7 subscales measuring phonological awareness, word and letter knowledge and spelling. A questionnaire about background factors and the home and school environments related to literacy was also used. The schools were chosen based on performance status (i.e. high, good, average and low performing schools) in order to include variation. For validation, 64 children were chosen from the original sample to take an individual test measuring nonsense word reading, word reading, actual text reading, one-minute reading and writing. School marks from grade one and a follow-up test halfway through grade two were also used for validation. The correlations between the results from the group test and the three measures used for validation were very high (.83-.95). Content validity of the group test was established by using items drawn from authorized text books for reading in grade one.
Construct validity was analyzed through item analysis and principal component analysis. The difficulty level of most items in both the group test and the follow-up test was good. The items also discriminated well. Principal component analysis revealed one powerful latent dimension (an initial literacy factor), accounting for 93% of the variance. This implies that it could be possible to use any set of the subtests of the group test for screening and prediction. The K-Means cluster analysis revealed four clusters: at-risk children, strugglers, readers and good readers. The main concern in this study was with the groups of at-risk children (24%) and strugglers (22%), who need the most assistance. The predictive validity of the group test was analyzed by correlating the measures from the two school years and by cross-tabulating grade one and grade two clusters. All the correlations were positive and very high, and 94% of the at-risk children in grade two had already been identified by the group test in grade one. The explanatory power of some of the home and school factors was very strong. The number of books at home accounted for 38% of the variance in reading and writing ability measured by the group test. Parents' reading ability and the support children received at home for schoolwork were also influential factors. Among the studied school factors, school attendance had the strongest explanatory power, accounting for 21% of the variance in reading and writing ability. Having been in nursery school was also of importance. Based on the findings in the study, a short version of the group test was created. It is suggested for use in the screening process in grade one, aimed at identifying children at risk of reading and writing difficulties in the Tanzanian context. Suggestions for further research as well as actions for improving the literacy skills of Tanzanian children are presented.
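The finding of one dominant latent dimension can be illustrated with a small numerical sketch. The data below are simulated (the sample size, noise level and seven-subtest structure are hypothetical, not the study's data); the sketch merely shows how a one-factor structure makes the first principal component capture most of the variance:

```python
# Illustrative sketch (not the study's actual analysis): when several subtest
# scores are all driven by a single latent factor, the first principal
# component of their covariance matrix explains most of the total variance.
import numpy as np

rng = np.random.default_rng(0)
# Simulated scores: 7 subtests, each equal to one latent "initial literacy"
# factor plus small independent noise (all figures are hypothetical).
latent = rng.normal(size=(300, 1))
scores = latent @ np.ones((1, 7)) + 0.2 * rng.normal(size=(300, 7))

# PCA via the eigenvalues of the covariance matrix (eigvalsh -> ascending).
cov = np.cov(scores, rowvar=False)
eigvals = np.linalg.eigvalsh(cov)[::-1]  # descending order
explained = eigvals[0] / eigvals.sum()
print(f"first component explains {explained:.0%} of the variance")
```

With this much shared signal the first component explains well over 90% of the variance, mirroring the structure reported in the study.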
Abstract:
The thermomechanical pulping (TMP) process is a highly energy-intensive process, with a specific energy consumption (SEC) of typically 2–3.5 MWh/bdt. About 93% of the energy is consumed in refining, divided such that two thirds is consumed in main-line refining and one third in reject refining. The objective of this work was therefore to reduce energy consumption specifically in main-line and reject refining. In main-line refining, the effects of refiner segment pattern, power split and production rate on SEC were chosen as research topics. Reject refining was targeted by attempting to reduce the reject flow by means of pressure screening. Since the refining capacity of the TMP3 plant has been increased by 25%, the goal was to increase the main-line screening capacity by the same amount. A second goal was to reduce the reject ratio in main and reject screening and thereby reduce energy consumption in reject refining. These goals were approached by replacing the main-line screen rotors with TamScreen rotors and the reject screen rotors with Metso ProFoil rotors, and by optimizing the fiber fractions through screen basket and process parameter changes. With a feeding segment type, SEC could be reduced by 100 kWh/bdt, but the higher refining intensity also led to lower strength properties, higher air permeability and higher opacity. The power split also affected SEC: when the first-stage refiner was loaded more heavily, a SEC reduction of up to 70 kWh/bdt was achieved. Problems in measuring the production rate degraded the results of the production-rate trials to the extent that it cannot be concluded from them whether SEC depends on production rate or not. Main-line screening capacity could be increased by only 18% with the TamScreen rotor, falling slightly short of the target. In reject screening, the reject flow could be reduced considerably with the Metso ProFoil rotor and with screen basket and process parameter changes.
The SEC reduction achieved through screen room development was estimated, based on the reduction in mass reject ratio and the SEC used in reject refining, to be about 130 kWh/bdt. In summary, the target of a 300 kWh/bdt SEC reduction can be achieved with the methods used in this work, provided their full potential is exploited in production.
Abstract:
The objective of this thesis is the development of a multibody dynamic model matching the observed movements of the lower limb of a skier performing the skating technique in cross-country style. During the construction of this model, the equation of motion was formulated using the Euler–Lagrange approach with multipliers applied to a multibody system in three dimensions. The description of the lower limb of the skate skier and the ski was completed by employing three bodies, one representing the ski, and two representing the natural movements of the leg of the skier. The resultant system has 13 joint constraints due to the interconnection of the bodies, and four prescribed kinematic constraints to account for the movements of the leg, leaving the number of degrees of freedom equal to one. The push-off force exerted by the skate skier was taken directly from measurements made on-site in the ski tunnel at the Vuokatti facilities (Finland) and was input into the model as a continuous function. Then, the resultant velocities and movement of the ski, the center of mass of the skier, and the variation of the skating angle were studied to understand the response of the model to the variation of important parameters of the skating technique. This allowed a comparison of the model results with the real movement of the skier. Further developments can be made to this model to better approximate the real movement of the leg. One can achieve this by changing the constraints to include the behavior of the real leg joints and muscle actuation. As mentioned in the introduction of this thesis, a multibody dynamic model can be used to provide relevant information to ski designers and to obtain optimized results for the given variables, which athletes can use to improve their performance.
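For orientation, the Euler–Lagrange formulation with multipliers mentioned above generally leads to differential-algebraic equations of motion of the following standard form (this is the generic multibody form, not the thesis's specific matrices):

```latex
% Generic constrained equations of motion: M is the mass matrix, q the
% generalized coordinates, Q the applied forces (here including the measured
% push-off force), \Phi collects the 13 joint and 4 prescribed kinematic
% constraints, and \lambda are the Lagrange multipliers.
\begin{aligned}
\mathbf{M}(\mathbf{q})\,\ddot{\mathbf{q}}
  + \boldsymbol{\Phi}_{\mathbf{q}}^{\mathsf{T}}\,\boldsymbol{\lambda}
  &= \mathbf{Q}(\mathbf{q},\dot{\mathbf{q}},t) \\
\boldsymbol{\Phi}(\mathbf{q},t) &= \mathbf{0}
\end{aligned}
```

With 18 constraint equations on a system whose bodies carry 19 coordinates in total, one degree of freedom remains, consistent with the count given above.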
Abstract:
Prerequisites and effects of proactive and preventive psycho-social student welfare activities in Finnish preschool and elementary school were of interest in the present thesis. So far, Finnish student welfare work has mainly focused on interventions and individuals, and the considerable possibilities to enhance the well-being of all students as part of everyday school work have not been fully exploited. Consequently, in this thesis three goals were set: (1) to present concrete examples of proactive and preventive psycho-social student welfare activities in Finnish basic education; (2) to investigate measurable positive effects of proactive and preventive activities; and (3) to investigate the implementation of proactive and preventive activities in ecological contexts. Two prominent phenomena in preschool and elementary school years—transition to formal schooling and school bullying—were chosen as examples of critical situations that are appropriate targets for proactive and preventive psycho-social student welfare activities. Until lately, the procedures concerning both school transitions and school bullying have been rather problem-focused and reactive in nature. Theoretically, we lean on the bioecological model of development by Bronfenbrenner and Morris with concentric micro-, meso-, exo- and macrosystems. Data were drawn from two large-scale research projects, the longitudinal First Steps Study: Interactive Learning in the Child–Parent–Teacher Triangle, and the Evaluation Study of the National Antibullying Program KiVa. In Study I, we found that the academic skills of children from preschool–elementary school pairs that implemented several supportive activities during the preschool year developed more quickly from preschool to Grade 1 compared with the skills of children from pairs that used fewer practices.
In Study II, we focused on possible effects of proactive and preventive actions on teachers and found that participation in the KiVa antibullying program influenced teachers' self-evaluated competence to tackle bullying. In Studies III and IV, we investigated factors that affect the implementation rate of these proactive and preventive actions. In Study III, we found that the principal's commitment and support for antibullying work has a clear-cut positive effect on the implementation adherence of the student lessons of the KiVa antibullying program. The more teachers experience support for and commitment to anti-bullying work from their principal, the more they report having covered KiVa student lessons and topics. In Study IV, we wanted to find out why some schools implement several useful and inexpensive transition practices, whereas other schools use only a few of them. We were interested in broadening the scope and looking at local-level (exosystem) qualities, and, in fact, the local-level activities and guidelines, along with the teacher-reported importance of the transition practices, were the only factors significantly associated with the implementation rate of transition practices between elementary schools and partner preschools. Teacher- and school-level factors available in this study turned out to be mostly non-significant. To summarize, the results confirm that school-based promotion and prevention activities may have beneficial effects not only on students but also on teachers. Second, various top-down processes, such as engagement at the level of elementary school principals or local administration, may enhance the implementation of these beneficial activities. The main message is that when aiming to support the lives of children, the primary focus should be on adults. In the future, the promotion of psychosocial well-being and the intrinsic value of inter- and intrapersonal skills need to be strengthened in the Finnish educational system.
Future research efforts in student welfare and school psychology, as well as focused training for psychologists in educational contexts, should be encouraged in the departments of psychology and education in Finnish universities. Moreover, a specific research centre for school health and well-being should be established.
Abstract:
The main objective of this work is to analyze the importance of the gas-solid interface transfer of the kinetic energy of the turbulent motion on the accuracy of prediction of the fluid dynamics of Circulating Fluidized Bed (CFB) reactors. CFB reactors are used in a variety of industrial applications related to combustion, incineration and catalytic cracking. In this work a two-dimensional fluid dynamic model for gas-particle flow has been used to compute the porosity, the pressure, and the velocity fields of both phases in 2-D axisymmetrical cylindrical co-ordinates. The fluid dynamic model is based on the two-fluid model approach, in which both phases are considered to be continuous and fully interpenetrating. CFB processes are essentially turbulent. The model of effective stress on each phase is that of a Newtonian fluid, where the effective gas viscosity was calculated from the standard k-epsilon turbulence model and the transport coefficients of the particulate phase were calculated from the kinetic theory of granular flow (KTGF). This work shows that the turbulence transfer between the phases is very important for a better representation of the fluid dynamics of CFB reactors, especially for systems with internal recirculation and high gradients of particle concentration. Two systems with different characteristics were analyzed. The results were compared with experimental data available in the literature. The results were obtained by using a computer code developed by the authors. The finite volume method with a collocated grid, the hybrid interpolation scheme, the false time step strategy and the SIMPLEC (Semi-Implicit Method for Pressure Linked Equations - Consistent) algorithm were used to obtain the numerical solution.
Abstract:
Wastes and side streams in the mining industry, as well as various anthropogenic wastes, often contain valuable metals in such concentrations that their recovery may be economically viable. These raw materials are collectively called secondary raw materials. The recovery of metals from these materials is also environmentally favorable, since many of the metals, for example heavy metals, are hazardous to the environment. This has been noticed by legislative bodies, and strict regulations for handling both mining and anthropogenic wastes have been developed, mainly in the last decade. In the mining and metallurgy industry, important secondary raw materials include, for example, steelmaking dusts (recoverable metals e.g. Zn and Mo), zinc plant residues (Ag, Au, Ga, Ge, In) and waste slurry from Bayer process alumina production (Ga, REE, Ti, V). Among anthropogenic wastes, waste electrical and electronic equipment (WEEE), including LCD screens and fluorescent lamps, is clearly the most important from a metals recovery point of view. Metals that are commonly recovered from WEEE include, for example, Ag, Au, Cu, Pd and Pt. In LCD screens indium, and in fluorescent lamps REEs, are possible target metals. Hydrometallurgical processing routes are highly suitable for the treatment of complex and/or low-grade raw materials, as secondary raw materials often are. These solid or liquid raw materials often contain large amounts of base metals, for example. Thus, in order to recover valuable metals present at low concentrations, highly selective separation methods, such as hydrometallurgical routes, are needed. In addition, hydrometallurgical processes are seen as more environmentally friendly and have lower energy consumption when compared to pyrometallurgical processes. In this thesis, solvent extraction and ion exchange are the most important hydrometallurgical separation methods studied.
Solvent extraction is a mainstream unit operation in the metallurgical industry for all kinds of metals, but for ion exchange, practical applications are not as widespread. However, ion exchange is known to be particularly suitable for dilute feed solutions and complex separation tasks, which makes it a viable option, especially for processing secondary raw materials. The recovery of valuable metals was studied with five different raw materials, which included liquid and solid side streams from metallurgical industries and WEEE. Recovery of high-purity (99.7%) In from LCD screens was achieved by leaching with H2SO4, extracting In and Sn into D2EHPA, and selectively stripping In into HCl. In was also concentrated in the solvent extraction stage from 44 mg/L to 6.5 g/L. Ge was recovered as a side product from two different base metal process liquors with an N-methylglucamine functional chelating ion exchange resin (IRA-743). Based on equilibrium and dynamic modeling, a mechanism for this moderately complex adsorption process was suggested. Eu and Y were leached with high yields (91 and 83%) by 2 M H2SO4 from a fluorescent lamp precipitate from a waste treatment plant. The waste also contained significant amounts of other REEs, such as Gd and Tb, but these were not leached with common mineral acids under ambient conditions. Zn was selectively leached over Fe from steelmaking dusts with a controlled acidic leaching method, in which the pH did not go below 3 but was held as close to it as possible. Mo was also present in the other studied dust, and was leached with pure water more effectively than with the acidic methods. Good yield and selectivity in the solvent extraction of Zn were achieved with D2EHPA. However, Fe needs to be eliminated in advance, either by the controlled leaching method or, for example, by precipitation. A 100% pure Mo/Cr product was achieved with a quaternary ammonium salt (Aliquat 336) directly from the water leachate, without pH adjustment (pH 13.7).
A Mo/Cr mixture was also obtained from H2SO4 leachates with the hydroxyoxime LIX 84-I and trioctylamine (TOA), but the purities were 70% at most. However, with Aliquat 336, again an over 99% pure mixture was obtained. High selectivity for Mo over Cr was not achieved with any of the studied reagents. An Ag-NaCl solution was purified from divalent impurity metals by an aminomethylphosphonium functional Lewatit TP-260 ion exchange resin. A novel preconditioning method, named controlled partial neutralization, using conjugate bases of weak organic acids, was used to control the pH in the column to avoid capacity losses or precipitation. Counter-current SMB was shown to be a better process configuration than either batch column operation or the cross-current operation conventionally used in the metallurgical industry. The raw materials used in this thesis were also evaluated from an economic point of view, and the precipitate from a waste fluorescent lamp treatment process was clearly shown to be the most promising.
Abstract:
With the shift towards many-core computer architectures, dataflow programming has been proposed as one potential solution for producing software that scales to a varying number of processor cores. Programming for parallel architectures is considered difficult, as the current popular programming languages are inherently sequential and introducing parallelism is typically up to the programmer. Dataflow, however, is inherently parallel, describing an application as a directed graph, where nodes represent calculations and edges represent a data dependency in the form of a queue. These queues are the only allowed communication between the nodes, making the dependencies between the nodes explicit and thereby also the parallelism. Once a node has sufficient inputs available, the node can, independently of any other node, perform calculations, consume inputs, and produce outputs. Dataflow models have existed for several decades and have become popular for describing signal processing applications, as the graph representation is a very natural representation within this field. Digital filters are typically described with boxes and arrows, also in textbooks. Dataflow is also becoming more interesting in other domains, and in principle, any application working on an information stream fits the dataflow paradigm. Such applications are, among others, network protocols, cryptography, and multimedia applications. As an example, the MPEG group standardized a dataflow language called RVC-CAL to be used within reconfigurable video coding. Describing a video coder as a dataflow network instead of with conventional programming languages makes the coder more readable, as it describes how the video data flows through the different coding tools. While dataflow provides an intuitive representation for many applications, it also introduces some new problems that need to be solved in order for dataflow to be more widely used.
The explicit parallelism of a dataflow program is descriptive and enables improved utilization of the available processing units; however, the independent nodes also imply that some kind of scheduling is required. The need for efficient scheduling becomes even more evident when the number of nodes is larger than the number of processing units and several nodes are running concurrently on one processor core. There exist several dataflow models of computation, with different trade-offs between expressiveness and analyzability. These vary from rather restricted but statically schedulable, with minimal scheduling overhead, to dynamic, where each firing requires a firing rule to be evaluated. The model used in this work, namely RVC-CAL, is a very expressive language, and in the general case it requires dynamic scheduling; however, the strong encapsulation of dataflow nodes enables analysis, and the scheduling overhead can be reduced by using quasi-static, or piecewise static, scheduling techniques. The scheduling problem is concerned with finding the few scheduling decisions that must be made at run-time, while most decisions are pre-calculated. The result is then a set of static schedules, as small as possible, that are dynamically scheduled. To identify these dynamic decisions and to find the concrete schedules, this thesis shows how quasi-static scheduling can be represented as a model checking problem. This involves identifying the relevant information to generate a minimal but complete model to be used for model checking. The model must describe everything that may affect scheduling of the application while omitting everything else in order to avoid state space explosion. This kind of simplification is necessary to make the state space analysis feasible. For the model checker to find the actual schedules, a set of scheduling strategies is defined which is able to produce quasi-static schedulers for a wide range of applications.
The results of this work show that actor composition with quasi-static scheduling can be used to transform dataflow programs to fit many different computer architectures with different types and numbers of cores. This, in turn, enables dataflow to provide a more platform-independent representation, as one application can be fitted to a specific processor architecture without changing the actual program representation. Instead, the program representation is optimized, in the context of design space exploration, by the development tools to fit the target platform. This work focuses on representing the dataflow scheduling problem as a model checking problem and is implemented as part of a compiler infrastructure. The thesis also presents experimental results as evidence of the usefulness of the approach.
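The core dataflow ideas described above, nodes that communicate only through FIFO queues and fire whenever enough input tokens are available, can be sketched in a few lines. This is a minimal illustration (not RVC-CAL; the node names and the two-node graph are hypothetical), including the kind of trivial dynamic scheduler whose overhead quasi-static scheduling aims to remove:

```python
# Minimal dataflow sketch: nodes communicate only via FIFO queues and fire
# independently whenever sufficient input tokens are available, which is
# what makes the parallelism of the graph explicit.
from collections import deque

class Node:
    def __init__(self, fn, needs, inq, outq):
        self.fn, self.needs, self.inq, self.outq = fn, needs, inq, outq

    def can_fire(self):
        # Firing rule: enough tokens on the input queue.
        return len(self.inq) >= self.needs

    def fire(self):
        tokens = [self.inq.popleft() for _ in range(self.needs)]
        self.outq.append(self.fn(*tokens))

# A hypothetical two-node pipeline: double each value, then sum pairs.
q0, q1, q2 = deque([1, 2, 3, 4]), deque(), deque()
doubler = Node(lambda x: 2 * x, needs=1, inq=q0, outq=q1)
adder = Node(lambda a, b: a + b, needs=2, inq=q1, outq=q2)

# A trivial dynamic scheduler: repeatedly fire any node whose rule holds.
nodes = [doubler, adder]
while any(n.can_fire() for n in nodes):
    for n in nodes:
        if n.can_fire():
            n.fire()
print(list(q2))  # -> [6, 14]
```

In a quasi-static scheduler, the repeated `can_fire` checks in the loop would largely be replaced by pre-computed static firing sequences, with run-time decisions kept only where the firing order genuinely depends on the data.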
Abstract:
The heptapeptide angiotensin-(1-7) is considered to be a biologically active end product of the renin-angiotensin system. This angiotensin, which is devoid of the best-known actions of angiotensin II, such as induction of drinking behavior and vasoconstriction, has several selective effects in the brain and periphery. In the present article we briefly review recent evidence for a physiological role of angiotensin-(1-7) in the control of hydroelectrolyte balance.
Abstract:
We investigated the effects of aerobic training on the efferent autonomic control of heart rate (HR) during dynamic exercise in middle-aged men, eight of whom underwent exercise training (T) while the other seven continued their sedentary (S) life style. The training was conducted over 10 months (three 1-h sessions/week on a field track at 70-85% of the peak HR). The contribution of sympathetic and parasympathetic exercise tachycardia was determined in terms of differences in the time constant effects on the HR response obtained using a discontinuous protocol (4-min tests at 25, 50, 100 and 125 watts on a cycle ergometer), and a continuous protocol (25 watts/min until exhaustion) allowed the quantification of the parameters (anaerobic threshold, VO2 AT; peak O2 uptake, VO2 peak; power peak) that reflect oxygen transport. The results obtained for the S and the T groups were: 1) a smaller resting HR in T (66 beats/min) when compared to S (84 beats/min); 2) during exercise, a small increase in the fast tachycardia (D0-10 s) related to vagal withdrawal (P<0.05, only at 25 watts) was observed in T at all powers; at middle and higher powers a significant decrease (P<0.05 at 50, 100 and 125 watts) in the slow tachycardia (D1-4 min) related to a sympathetic-dependent mechanism was observed in T; 3) the VO2 AT (S = 1.06 and T = 1.33 l/min) and VO2 peak (S = 1.97 and T = 2.47 l/min) were higher in T (P<0.05). These results demonstrate that aerobic training can induce significant physiological adaptations in middle-aged men, mainly expressed as a decrease in the sympathetic effects on heart rate associated with an increase in oxygen transport during dynamic exercise.
Abstract:
While traditional entrepreneurship literature attributes the pursuit of entrepreneurial opportunities to a solo entrepreneur, scholars increasingly agree that new ventures are often founded and operated by entrepreneurial teams as collective efforts, especially in high-technology industries. Researchers also suggest that team ventures are more likely to survive and succeed than ventures founded by an individual entrepreneur, although specific challenges may arise from multiple individuals being involved in joint entrepreneurial action. In addition to new ventures, entrepreneurial teams are seen as central to organizing work in established organizations, since such teams are able to create major product and service innovations that drive organizational success. Acknowledgement of entrepreneurial teams in various organizational contexts has challenged the notion of the individual entrepreneur. However, considering that entrepreneurial teams represent a collective-level phenomenon based on interactions between organizational members, entrepreneurial teams may not have been studied as in-depth as could be expected from the point of view of the team level, rather than the individual or the individuals in the team. Many entrepreneurial team studies adopt the individualized view of entrepreneurship and examine the team members' aggregate characteristics or the role of a lead entrepreneur. These previous approaches might not offer a sufficiently comprehensive and in-depth understanding of collectiveness within entrepreneurial teams and of team venture performance, which often relates to team-level issues in particular. In addition, as the collective level of entrepreneurial teams has been approached in various ways in the existing literature, the phenomenon has been difficult to understand in research and practice. Hence, there is a need to understand entrepreneurial teams at the collective level through a systematic and comprehensive perspective.
This study takes part in the discussions on entrepreneurial teams. The overall objective of this study is to offer a description and understanding of collectiveness within entrepreneurial teams beyond the individual(s). The research questions of the study are: 1) what collectiveness within entrepreneurial teams stands for, what constitutes its basic elements, and who is included in it; 2) why, how, and when collectiveness emerges or is reinforced within entrepreneurial teams; and 3) why collectiveness within entrepreneurial teams matters and how it could be developed or supported. In order to answer these questions, this study draws on three approaches, two sets of empirical data, two analysis techniques, and a conceptual study. The first data set consists of 12 qualitative semi-structured interviews with business school students who are seen as prospective entrepreneurs. These data are approached through a social constructionist perspective and analyzed through discourse analysis. The second data set is based on a qualitative multiple-case study approach that aims at theory elaboration. The main data consist of 14 individual and four group semi-structured thematic interviews with members of the core entrepreneurial teams of four team startups in high-technology industries. The secondary data include publicly available documents. This data set is approached through a critical realist perspective and analyzed through systematic thematic analysis. The study is completed by a conceptual study that aims at building a theoretical model of collective-level entrepreneurship, drawing on existing literature in organizational theory and social psychology. The theoretical work applies a positivist perspective. This study consists of two parts. The first part includes an overview that introduces the research background, knowledge gaps and objectives, research strategy, and key concepts.
It also outlines the existing knowledge in the entrepreneurial team literature, presents and justifies the choices of paradigms and methods, summarizes the publications, and synthesizes the findings by answering the above-mentioned research questions. The second part consists of five publications that address independent research questions but together enable answering the research questions set for this study as a whole. The findings of this study suggest a map of relevant concepts and their relationships that helps grasp collectiveness within entrepreneurial teams. The analyses conducted in the publications suggest that collectiveness within entrepreneurial teams stands for cognitive and affective structures between team members, including elements of collective entity, collective idea of business, collective effort, collective attitudes and motivations, and collective feelings. Collectiveness within entrepreneurial teams also stands for specific joint entrepreneurial action components in which these structures are constructed. The action components reflect equality and democracy, and open and direct communication in particular. Collectiveness emerges because it is a powerful tool for overcoming individualized barriers to entrepreneurship, and because of a collectively oriented desire for, collective value orientation toward, demand for, and encouragement toward team entrepreneurship. Collectiveness emerges and is reinforced in processes of joint creation and realization of entrepreneurial opportunities, including joint analysis and planning of opportunities and strategies, decision-making and realization of opportunities, and evaluation, feedback, and sanctions of entrepreneurial action. Collectiveness matters because it is relevant for potential future entrepreneurs and because it affects the ways collective ventures are initiated and managed.
Collectiveness also matters because it is a versatile, dynamic, and malleable phenomenon whose ideas can be applied across organizational contexts that require teamwork in discovering or creating and realizing new opportunities. This study further discusses how the findings add to the existing knowledge in the entrepreneurial team literature and how these ideas can be applied in educational, managerial, and policy contexts.
Abstract:
Part I: Ultra-trace determination of vanadium in lake sediments: a performance comparison using O2, N2O, and NH3 as reaction gases in ICP-DRC-MS

Thermal ion-molecule reactions, targeting removal of specific spectroscopic interference problems, have become a powerful tool for method development in quadrupole-based inductively coupled plasma mass spectrometry (ICP-MS) applications. A study was conducted to develop an accurate method for the determination of vanadium in lake sediment samples by ICP-MS coupled with a dynamic reaction cell (DRC), using two different chemical resolution strategies: a) direct removal of the interfering ClO+ and b) vanadium oxidation to VO+. The performance of three reaction gases suitable for handling vanadium interference in the dynamic reaction cell was systematically studied and evaluated: ammonia for ClO+ removal, and oxygen and nitrous oxide for oxidation. Although it produced vanadium results comparable to those obtained with oxygen and nitrous oxide, NH3 did not completely eliminate a matrix effect caused by the presence of chloride and required large-scale dilutions (with a concomitant increase in variance) when the sample and/or the digestion medium contained large amounts of chloride. Among the three candidate reaction gases at their optimized conditions, creation of VO+ with oxygen gas delivered the best analyte sensitivity and the lowest detection limit (2.7 ng L-1).
Vanadium results obtained from fourteen lake sediment samples and a certified reference material (CRM031-040-1), using the two different analyte/interference separation strategies, suggested that vanadium mono-oxidation offers better performance than the conventional method using NH3 for ultra-trace vanadium determination by ICP-DRC-MS and can be readily employed in environmental chemistry applications that deal with ultra-trace contaminants.

Part II: Validation of a modified oxidation approach for the quantification of total arsenic and selenium in complex environmental matrices

Spectroscopic interference problems of arsenic and selenium in ICP-MS practice were investigated in detail. A preliminary literature review suggested that oxygen could serve as an effective candidate reaction gas for analysis of the two elements in dynamic reaction cell coupled ICP-MS. An accurate method was developed for the determination of As and Se in complex environmental samples, based on a series of modifications to a previously reported oxidation approach for As and Se. Rhodium was used as an internal standard to help minimize non-spectral interferences such as instrumental drift. Using an oxygen gas flow slightly higher than 0.5 mL min-1, arsenic is efficiently converted to the 75As16O+ ion, whereas the potentially interfering ion 91Zr+ is completely removed. Instead of the most abundant Se isotope, 80Se, selenium was determined via the second most abundant isotope, 78Se, in the form of 78Se16O+. Upon careful selection of the oxygen gas flow rate and optimization of the RPq value, the previous isobaric threats caused by Zr and Mo were reduced to background levels, whereas another potential atomic isobar, 96Ru+, became completely harmless to the new selenium analyte.
The new method underwent a strict validation procedure in which the recovery of a suitable certified reference material was examined and the sample data obtained were compared with those produced by a credible external laboratory that analyzed the same set of samples using a standardized HG-ICP-AES method. The validation results were satisfactory. The resulting limits of detection for arsenic and selenium were 5 ng L-1 and 60 ng L-1, respectively.
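For context, instrumental detection limits such as the ng L-1 figures reported above are commonly estimated with the 3-sigma convention: three times the standard deviation of repeated blank measurements divided by the calibration slope. The abstract does not state which convention was used, so this is a generic sketch with hypothetical numbers:

```python
# 3-sigma detection-limit sketch; the blank counts and calibration slope
# below are hypothetical illustrations, not values from the study.
def limit_of_detection(blank_signals, slope):
    """LOD = 3 * sample standard deviation of the blanks / calibration slope."""
    n = len(blank_signals)
    mean = sum(blank_signals) / n
    variance = sum((x - mean) ** 2 for x in blank_signals) / (n - 1)
    return 3 * variance ** 0.5 / slope

blanks = [120.0, 118.0, 123.0, 119.0, 121.0, 122.0]  # blank signals (cps)
slope = 2.0                                          # sensitivity, cps per ng L-1
lod = limit_of_detection(blanks, slope)              # detection limit in ng L-1
```

Lower blank variability or higher sensitivity (a steeper calibration slope) both drive the detection limit down, which is why the oxygen-gas VO+ strategy, with its superior sensitivity, achieved the lowest limit.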
Abstract:
The GARCH and stochastic volatility paradigms are often brought into conflict as two competing views of the appropriate conditional variance concept: conditional variance given past values of the same series, or conditional variance given a larger past information set (possibly including unobservable state variables). The main thesis of this paper is that, since in general the econometrician has no knowledge of anything like a structural level of disaggregation, a well-written volatility model should be specified in such a way that one is always allowed to reduce the information set without invalidating the model. In this respect, the debate between observable past information (in the GARCH spirit) and unobservable conditioning information (in the state-space spirit) is irrelevant. In this paper, we stress a square-root autoregressive stochastic volatility (SR-SARV) model which remains true to the GARCH paradigm of ARMA dynamics for squared innovations but weakens the GARCH structure in order to obtain the required robustness properties with respect to various kinds of aggregation. It is shown that the lack of robustness of the usual GARCH setting is due to two very restrictive assumptions: perfect linear correlation between squared innovations and the conditional variance on the one hand, and a linear relationship between the conditional variance of the future conditional variance and the squared conditional variance on the other. By relaxing these assumptions, thanks to a state-space setting, we obtain aggregation results without renouncing the conditional variance concept (and related leverage effects), as is the case for the recently suggested weak GARCH model, which obtains aggregation results by replacing conditional expectations with linear projections on symmetric past innovations.
Moreover, unlike the weak GARCH literature, we are able to define multivariate models, including higher-order dynamics and risk premiums (in the spirit of GARCH(p,p) and GARCH-in-mean), and to derive conditional moment restrictions well suited for statistical inference. Finally, we are able to characterize the exact relationships between our SR-SARV models (including higher-order dynamics, leverage effects and in-mean effects), usual GARCH models, and continuous-time stochastic volatility models, so that previous results on aggregation of weak GARCH and continuous-time GARCH modeling can be recovered in our framework.
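The state-space relaxation of GARCH described above can be sketched schematically as follows; the notation is illustrative (a first-order case), not taken verbatim from the paper:

```latex
% Illustrative SR-SARV(1)-style sketch; symbols are our own choice.
\varepsilon_{t+1} = \sigma_t \,\eta_{t+1},
\qquad \mathrm{E}\!\left[\eta_{t+1} \mid J_t\right] = 0,
\qquad \mathrm{E}\!\left[\eta_{t+1}^2 \mid J_t\right] = 1,
\]
\[
\sigma_t^2 = f_t,
\qquad f_t = \omega + \gamma f_{t-1} + u_t,
\qquad \mathrm{E}\!\left[u_t \mid J_{t-1}\right] = 0 .
```

Here $J_t$ is the (possibly larger) conditioning information set. Standard GARCH(1,1) corresponds to the restrictive case in which $u_t$ is perfectly linearly correlated with the squared innovation $\varepsilon_t^2$; relaxing that correlation while keeping the AR(1) variance dynamics is what yields robustness to temporal aggregation and to reductions of the information set.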