Abstract:
The study compared the profitability of selection cuttings and small gap cuttings, which promote uneven-aged forest structure, with forest management following the silvicultural recommendations in Central Finland. Selection cuttings and small gap cuttings are methods that can increase the kind of small-scale habitat variation produced by the disturbance dynamics of natural forests, and they are therefore especially suitable for special sites managed for biodiversity, landscape or multiple-use values. They usually lead gradually to an uneven-aged forest structure in which the diameter distribution of the growing stock resembles an inverted J. The economic profitability of uneven-aged forest management is supported by the absence of regeneration costs and by regularly recurring harvests concentrated on sawlog-sized trees. The suitability of the method to Finnish conditions is, however, considered uncertain. This study examined the conversion of an even-aged forest into an uneven-aged one over a 40-year transition period in the Isojärvi environmental value forest in Kuhmoinen, administered by Metsähallitus. The research material consisted of 405 spruce-dominated even-aged stands with 636 hectares of forest land. Forest development was simulated using tree-level growth models, and the treatments were simulated in five-year periods with the SIMO forest planning software. The simulations were used to determine, for each treatment scenario, the harvest removals by timber assortment, the discounted cash flows and the change in growing stock capital over the study period. The calculation of harvesting unit costs was supported by an automated monitoring system in which mobile phones installed in the forest machines collected acceleration data, GPS location data and operator input data on machine use with the MobiDoc2 application. Finally, a net present value was calculated for each treatment scenario with a valuation formula describing the timber production value of the forest, and the discounted harvesting costs were subtracted from it.
According to the results, at a 3 % interest rate the net present value (NPV) of selection cutting was about 91 % (7420 €/ha) and that of small gap cuttings about 99 % (8076 €/ha) of the NPV of management following the silvicultural recommendations (8176 €/ha). Comparative statics showed that raising the interest rate to 5 % did not substantially increase the differences between the net present values. The harvesting unit costs of selection cuttings were 0.8 €/m3 lower than those of thinnings and 7.2 €/m3 higher than those of regeneration cuttings. The unit costs of small gap cuttings were 0.7 €/m3 higher than those of regeneration cuttings. Based on the results, the transition from an even-aged to an uneven-aged forest inevitably causes economic losses, even though the cuttings are heavy and carried out in an advanced thinning-stage stand. The loss is the opportunity cost of maintaining continuous forest cover.
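The NPV comparison above rests on ordinary discounting of per-hectare cash flows. A minimal sketch, with made-up cash flows and an assumed helper name `npv`; the study's actual data and valuation formula are not reproduced here:

```python
# Hypothetical sketch of the NPV comparison described above; the cash
# flows below are illustrative placeholders, not the study's data.

def npv(cash_flows, rate):
    """Net present value of (year, euros/ha) cash flows discounted to year 0."""
    return sum(cf / (1 + rate) ** year for year, cf in cash_flows)

# Illustrative per-hectare net incomes over a 40-year transition
# (year, euros/ha), harvesting costs already deducted.
selection = [(0, 2500.0), (15, 2200.0), (30, 2400.0)]
recommended = [(0, 3000.0), (20, 2800.0), (40, 2600.0)]

for rate in (0.03, 0.05):
    ratio = npv(selection, rate) / npv(recommended, rate)
    print(f"rate={rate:.0%}: selection NPV is {ratio:.0%} of recommended")
```

Raising the discount rate shrinks the weight of distant harvests for both scenarios, which is why the relative gap between scenarios need not widen much.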
Abstract:
The modern subject is what we can call a self-subjecting individual. This is someone in whose inner reality has been implanted a more permanent governability, a governability that works inside the agent. Michel Foucault's genealogy of the modern subject is the history of its constitution by power practices. By a flight of imagination, suppose that this history is not an evolving social structure or cultural phenomenon, but one of those insects (moths) whose life cycle consists of three stages or moments: crawling larva, encapsulated pupa, and flying adult. Foucault's history of power-practices presents the same kind of miracle of total metamorphosis. The main forces in the general field of power can be apprehended through a generalisation of three rationalities functioning side by side in the plurality of different practices of power: domination, normalisation and the law. Domination is a force functioning by the rationality of reason of state: the state's essence is power, power is firm domination over people, and people are the state's resource by which the state's strength is measured. Normalisation is a force that takes hold of people from the inside of society: it imposes society's own reality, its empirical verity, as a norm on people through silently working jurisdictional operations that exclude pathological individuals too far from the average of the population as a whole. The law is a counterforce to both domination and normalisation. Accounting for elements of legal practice as omnihistorical is not possible without a view of the general field of power. Without this view, and only in terms of the operations and tactical manoeuvres of the practice of law, nothing of the kind can be seen: the only thing that practice manifests is constant change itself. However, the backdrop of law's tacit dimension, that is, the power-relations between law, domination and normalisation, allows one to see more.
In the general field of power, the function of law is exactly to maintain the constant possibility of change. Whereas domination and normalisation would stabilise society, the law makes it move. The European individual has a reality as a problem. What is a problem? A problem is something that allows entry into the field of thought, said Foucault. To be a problem, it is necessary for a certain number of factors to have made it uncertain, to have made it lose familiarity, or to have provoked a certain number of difficulties around it. Entering the field of thought through problematisations of the European individual, its human forms, power and knowledge, one is able to glimpse the historical backgrounds of our present being. These were produced, and then again buried, in intersections between practices of power and games of truth. In the problem of the European individual one has suitable circumstances that bring to light forces that have passed through the individual through the centuries.
Abstract:
The crystal structures of a number of globular proteins are currently available. An analysis of the distribution of side-chains among different allowed conformations in these proteins has been carried out. The observed conformations of individual residues are discussed on the basis of well-known stereochemical criteria. The population distribution of side-chains in different allowed regions in conformational space can be explained largely on the basis of simple steric considerations. In addition to examining the conformational behaviour of individual residues, some population distributions of conformational angles of general interest involving groups of residues have also been analyzed.
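The kind of population analysis the abstract describes can be illustrated by binning side-chain dihedral angles into the three staggered wells. A minimal sketch; the well labels, the 120-degree bin boundaries and the angle values are common stereochemical convention and made-up data, not the paper's exact criteria:

```python
from collections import Counter

def rotamer_well(chi1_deg):
    """Assign a chi1 dihedral (degrees) to the nearest staggered well."""
    chi1 = ((chi1_deg + 180.0) % 360.0) - 180.0   # normalise to [-180, 180)
    if -120.0 <= chi1 < 0.0:
        return "near -60"
    if 0.0 <= chi1 < 120.0:
        return "near +60"
    return "near 180"

# Made-up chi1 values for illustration only.
angles = [-65.0, -58.2, 177.5, 62.0, -179.0, 55.4]
print(Counter(rotamer_well(a) for a in angles))
```

Tabulating such counts over many residues gives the population distribution across allowed conformational regions that the analysis compares against steric expectations.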
Abstract:
The main aim of the present study was to develop information and communication technology (ICT) based chemistry education. The goals of the study were to support meaningful chemistry learning, research-based teaching and the diffusion of ICT innovations. These goals were used as guidelines that form the theoretical framework for this study. This doctoral dissertation is based on an eight-stage research project that included three design researches. These three design researches were scrutinized as separate case studies in which the different cases were formed according to different design teams: i) one researcher was in charge of the design and teachers were involved in the research process, ii) a research group was in charge of the design and students were involved in the research process, and iii) the design was done by student teams, the research was done collaboratively, and the design process was coordinated by a researcher. The research projects were conducted using a mixed-methods approach, which enabled a comprehensive view of education design. In addition, the three central areas of design research (problem analysis, design solution and design process) were included in the research, which was guided by main research questions formed according to these central areas: 1) design solution: what kind of elements are included in ICT-based learning environments that support meaningful chemistry learning and the diffusion of innovation; 2) problem analysis: what kind of new possibilities do the designed learning environments offer for supporting meaningful chemistry learning; and 3) design process: what kind of opportunities and challenges does collaboration bring to the design of ICT-based learning environments? The main research questions were answered on the basis of the analysis of the survey and observation data, the six designed learning environments and the ten design narratives from the three case studies.
Altogether 139 chemistry teachers and teacher students were involved in the design processes. The data were mainly analysed by methods of qualitative content analysis. The first main result of the study gives new information on meaningful chemistry learning and on the elements of an ICT-based learning environment that support the diffusion of innovation, which can help in the development of future ICT education design. When the designed learning environment was examined in the context of chemistry education, it was evident that an ICT-based chemistry learning environment supporting the meaningful learning of chemistry motivates the students and makes the teacher's work easier. In addition, it should enable the simultaneous fulfilment of several pedagogical goals and activate higher-level cognitive processes. A learning environment supporting the diffusion of ICT innovation is suitable for the Finnish school environment, based on open source code, and easy to use, with quality chemistry content. According to the second main result, new information was acquired about the possibilities of ICT-based learning environments in supporting meaningful chemistry learning. This will help in setting the goals for future ICT education. After the analysis of the design solutions and their evaluations, it can be said that ICT enables the recognition of all the elements that define learning environments (i.e. didactic, physical, technological and social elements). The research particularly demonstrates the significance of ICT in supporting students' motivation and higher-level cognitive processes, as well as the versatile visualization resources for chemistry that ICT makes possible. In addition, a research-based teaching method supports the diffusion of the studied innovation well at the individual level. The third main result brought out new information on the significance of collaboration in design research, which guides the design of ICT education development.
According to the analysis of the design narratives, it can be said that collaboration is important in the execution of scientifically reliable design research. It enables comprehensive requirement analysis and multifaceted development, which improves the reliability and validity of the research. At the same time, it poses reliability challenges by complicating documentation and coordination, for example. In addition, a new method for design research was developed. Its aim is to support the execution of complicated collaborative design projects. To increase the reliability and validity of the research, a model theory was used. It enables time-bound documentation and visualization of design decisions, which clarify the process. This improves the reliability of the research. The validity of the research is improved by requirement definition through models. In this way, learning environments that meet the design goals can be constructed. The designed method can be used in education development from the comprehensive school to the higher education level. It can be used to recognize the needs of different interest groups and individuals with regard to processes, technology and substance knowledge, as well as the interfaces and relations between them. The developed method also has commercial potential. It is used to design learning environments for the national and international markets.
Abstract:
According to the most prevalent view, there are 3-4 fixed "slots" in visual working memory for temporary storage. Recently this view has been challenged by a theory of dynamic resources, which are limited in their totality but can be freely allocated. The aim of this study is to clarify which of the theories better describes performance in visual working memory tasks with contour shapes. Thus, the interest of this study is in both the number of recalled stimuli and the precision of the memory representations. The stimuli in the experiments were radial frequency patterns, which were constructed by sinusoidally modulating the radius of a circle. Five observers participated in the experiment, which consisted of two different tasks. In the delayed discrimination task the number of recalled stimuli was measured with a 2-interval forced choice task. The observer was shown two displays serially with a 1.5 s ISI (inter-stimulus interval). The displays contained 1-6 patterns and differed from each other in the changed amplitude of one pattern. The participant's task was to report whether the changed pattern had the higher amplitude in the first or in the second interval. The amount of amplitude change was defined with the QUEST procedure, and the 75 % discrimination threshold was measured in the task. In the recall task the precision of the memory representations was measured with a subjective adjustment method. First, the observer was shown 1-6 patterns, and after a 1.5 s ISI one location of the previously shown patterns was cued. The observer's task was to adjust the amplitude of a probe pattern to match the amplitude of the pattern in working memory. In the delayed discrimination task the performance of all observers declined smoothly as the number of presented patterns was increased. The result supports the resource theory of working memory, as there was no sudden fall in performance.
The amplitude threshold for one item was 0.01–0.05, and as the number of items increased from 1 to 6 there was a 4–15-fold linear increase in the amplitude threshold (0.14–0.29). In the recall adjustment task the precision of four observers' performance declined smoothly as the number of presented patterns was increased. This result also supports the resource theory. The standard deviation for one item was 0.03–0.05, and as the number of items increased from 1 to 6 there was a 2–3-fold linear increase in the standard deviation (0.06–0.11). These findings show that performance in a visual working memory task is described better by the theory of freely allocated resources than by the traditional slot model. In addition, the allocation of the resources depends on the properties of the individual observer and of the visual working memory task.
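A radial frequency pattern can be generated directly from the description above: the radius of a circle is modulated sinusoidally. A minimal sketch with assumed parameter names (`r0`, `amplitude`, `frequency`, `phase` are our own labels); `amplitude` plays the role of the modulation amplitude varied in the tasks:

```python
import numpy as np

def rf_contour(r0=1.0, amplitude=0.05, frequency=4, phase=0.0, n=360):
    """Return (x, y) points of a radial frequency pattern contour."""
    theta = np.linspace(0, 2 * np.pi, n, endpoint=False)
    # Sinusoidal modulation of the radius around the base circle r0.
    r = r0 * (1.0 + amplitude * np.sin(frequency * theta + phase))
    return r * np.cos(theta), r * np.sin(theta)

x, y = rf_contour(amplitude=0.05, frequency=4)
radii = np.hypot(x, y)
print(radii.min(), radii.max())  # radius stays within r0*(1-A)..r0*(1+A)
```

With `amplitude=0` the contour is a plain circle, so the amplitude change between two displays is the single quantity the discrimination task probes.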
Abstract:
Ab initio molecular orbital (MO) calculations with the 3-21G and 6-31G basis sets were performed on a series of ion-molecule and ion pair-molecule complexes for the H2O + LiCN system. Stabilisation energies (with counterpoise corrections), geometrical parameters, internal force constants and harmonic vibrational frequencies were evaluated for 16 structures of interest. Although the interaction energies are smaller, the geometries and relative stabilities of the monohydrated contact ion pair are reminiscent of those computed for the complexes of the individual ions. Thus, interaction of the oxygen lone pair with lithium leads to a highly stabilised C2v structure, while the coordination of water to the cyanide ion involves a slightly non-linear hydrogen bond. Symmetrical bifurcated structures are computed to be saddle points on the potential energy surface, and to have an imaginary frequency for the rocking mode of the water molecule. On optimisation, the geometries of the solvent-shared ion pair structures (e.g. Li+···OH2···CN−) revealed a proton transfer from the water molecule, leading to hydrogen-bonded forms such as Li-O-H···HCN. The variations in the force constants and harmonic frequencies in the various structures considered are discussed in terms of ion-molecule and ion pair-molecule interactions.
Abstract:
The study analyses the ambivalent relationship that republicanism, as a form of self-government free from domination, had with the ideal of participatory oratory and non-dominated speech on the one hand, and with the danger of unhindered demagogy and its possibly fatal consequences for that form of government on the other. Although previous scholarship has delved deeply into republicanism as well as into rhetoric and public speech, the interplay between those aspects has gathered only scattered interest, and there has been no systematic study considering the variety of republican approaches to rhetoric and public speech in 17th-century England. The rare attempts to do so have been studies in English literature, and they have not analysed the political philosophy of republicanism, as the focus has been on republicanism as a literary culture. This study connects the fields of political theory, political history and literature in order to make a multidisciplinary contribution to intellectual history. The study shows that, within the tradition of classical republicanism, individual authors could make different choices when addressing the problematic topics of public speech and rhetoric, and the variety of their conclusions often set the authors against each other, resulting in the development of their theories through internal debates within the republican tradition. The authors under study were chosen to reflect this variety and the connections between them: the similarities between James Harrington and John Streater, and between John Milton and John Hall of Durham, are shown, as well as the controversies between Harrington and Milton, and Streater and Hall, respectively. In addition, by analysing the writings of Marchamont Nedham the study shows that the choices were not limited to more, or less, democratic brands of republicanism.
Most significantly, the study provides a thorough analysis of the political philosophies behind the various brands of republicanism, in addition to describing them. By means of this analysis, the study shows that previous attempts to assess the role of free speech and public debate through the lenses of modern, rights-based liberal political theory have resulted in an inappropriate framework for understanding early modern English republicanism. By approaching the topics through concepts used by the republicans (legitimate authority, leadership by oratory, and republican freedom) and through the frames of reference available and familiar to them (the roles of education and institutions), the study presents a thorough and systematic analysis of the role and function of rhetoric and public speech in English republicanism. The findings of this analysis have significant consequences for our current understanding of the history and development of republican political theory and, more generally, of the connections between democratic theory and free speech.
Abstract:
Ab initio MO calculations are performed on a series of ion-molecular and ion pair-molecular complexes of H2O + MX (MX = LiF, LiCl, NaCl, BeO and MgO) systems. BSSE-corrected stabilization energies, optimized geometrical parameters, internal force constants and harmonic vibrational frequencies have been evaluated for all the structures of interest. The trends observed in the geometrical parameters and other properties calculated for the mono-hydrated contact ion pair complexes parallel those computed for the complexes of the individual ions. The bifurcated structures are found to be saddle points with an imaginary frequency corresponding to the rocking mode of the water molecule. The solvent-shared ion pair complexes have high interaction energies. Trends in the internal force constant and harmonic frequency values are discussed in terms of ion-molecular and ion pair-molecular interactions.
Abstract:
In social selection the phenotype of an individual depends on its own genotype as well as on the phenotypes, and so the genotypes, of other individuals. This makes it impossible to associate an invariant phenotype with a genotype: the social context is crucial. Descriptions of metazoan development, which is often viewed as the acme of cooperative social behaviour, ignore or downplay this fact. The implicit justification for doing so is based on a group-selectionist point of view. Namely, embryos are clones, therefore all cells have the same evolutionary interest, and the visible differences between cells result from a common strategy. The reasoning is flawed, because phenotypic heterogeneity within groups can result from contingent choices made by cells from a flexible repertoire, as in multicellular development. What makes that possible is phenotypic plasticity, namely the ability of a genotype to exhibit different phenotypes. However, cooperative social behaviour with division of labour requires that different phenotypes interact appropriately, not that they belong to the same genotype or have overlapping genetic interests. We sketch a possible route to the evolution of social groups that involves many steps: (a) individuals that happen to be in spatial proximity benefit simply by virtue of their number; (b) traits that are already present act as preadaptations and improve the efficiency of the group; and (c) new adaptations evolve under selection in the social context, that is, via interactions between individuals, and further strengthen group behaviour. The Dictyostelid or cellular slime mould amoebae (CSMs) become multicellular in an unusual way, by the aggregation of free-living cells. In nature the resulting group can be genetically homogeneous (clonal) or heterogeneous (polyclonal); in either case its development, which displays strong cooperation between cells (to the extent of so-called altruism), is not affected.
This makes the CSMs exemplars for the study of social behaviour.
Abstract:
Multisensor recordings are becoming commonplace. When studying functional connectivity between different brain areas using such recordings, one defines regions of interest, and each region of interest is often characterized by a set (block) of time series. Presently, for two such regions, the interdependence is typically computed by estimating the ordinary coherence for each pair of individual time series and then summing or averaging the results over all such pairs of channels (one from block 1 and the other from block 2). The aim of this paper is to generalize the concept of coherence so that it can be computed for two blocks of non-overlapping time series. This quantity, called block coherence, is first shown mathematically to have properties similar to those of ordinary coherence, and then applied to analyze local field potential recordings from a monkey performing a visuomotor task. It is found that an increase in block coherence between the channels from the V4 region and the channels from the prefrontal region in the beta band leads to a decrease in response time.
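For contrast, the conventional pairwise scheme that block coherence generalizes can be sketched as follows; block coherence itself is defined on full cross-spectral matrices and is not reproduced here, and the channel counts, sampling rate and signal model below are illustrative:

```python
import numpy as np
from scipy.signal import coherence

def mean_pairwise_coherence(block1, block2, fs=1000.0, nperseg=256):
    """Average magnitude-squared coherence over all channel pairs,
    one channel from each block.

    block1, block2: arrays of shape (n_channels, n_samples).
    Returns (frequencies, mean coherence spectrum).
    """
    spectra = []
    for ch1 in block1:
        for ch2 in block2:
            f, cxy = coherence(ch1, ch2, fs=fs, nperseg=nperseg)
            spectra.append(cxy)
    return f, np.mean(spectra, axis=0)

rng = np.random.default_rng(0)
common = rng.standard_normal(4096)              # shared drive between regions
b1 = common + 0.5 * rng.standard_normal((3, 4096))
b2 = common + 0.5 * rng.standard_normal((2, 4096))
f, c = mean_pairwise_coherence(b1, b2)
print(c.mean())  # elevated across frequencies because of the shared signal
```

Averaging pairwise spectra discards the correlation structure within each block, which is exactly the information the block formulation retains.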
Abstract:
Conjugated polymers are intensively pursued as candidate materials for emission and detection devices, with the optical range of interest determined by the chemical structure. On the other hand, the optical range for emission and detection can also be tuned by size selection in semiconductor nanoclusters. The mechanisms for charge generation and separation upon optical excitation, and for light emission, are different in these systems. Hybrid systems based on these different classes of materials reveal interesting electronic and optical properties and add further insight into the individual characteristics of the different components. Multilayer structures and blends of these materials on different substrates were prepared for absorption, photocurrent (Iph), photoluminescence (PL) and electroluminescence (EL) studies. The polymers chosen were derivatives of polythiophene (PT) and polyparaphenylenevinylene (PPV), along with nanoclusters of cadmium sulphide with an average size of 4.4 nm (CdS-44). The photocurrent spectral response in these systems followed the absorption response around the band edges of each of the components and revealed additional features, which depended on bias voltage, thickness of the layers and interfacial effects. The current-voltage curves showed multi-component features, with emission varying over different voltage regimes. The emission spectral response revealed additive features, which are discussed in terms of excitonic mechanisms.
Abstract:
Sub-pixel classification is essential for the successful description of many land cover (LC) features with spatial resolution less than the size of the image pixels. A commonly used approach to sub-pixel classification is the linear mixture model (LMM). Even though LMMs have shown acceptable results, in practice purely linear mixtures do not exist. A non-linear mixture model may therefore better describe the resultant mixture spectra for endmember (pure pixel) distributions. In this paper, we propose a new methodology for inferring LC fractions by a process called the automatic linear-nonlinear mixture model (AL-NLMM). AL-NLMM is a three-step process where the endmembers are first derived by an automated algorithm. These endmembers are used by the LMM in the second step, which provides abundance estimation in a linear fashion. Finally, the abundance values, along with training samples representing the actual proportions, are fed as input to a multi-layer perceptron (MLP) architecture to train the neurons, which further refine the abundance estimates to account for the non-linear nature of the mixing of the classes of interest. AL-NLMM is validated on computer-simulated hyperspectral data of 200 bands. Validation of the output showed an overall RMSE of 0.0089±0.0022 with the LMM and 0.0030±0.0001 with the MLP-based AL-NLMM when compared to the actual class proportions, indicating that the individual class abundances obtained from AL-NLMM are very close to the real observations.
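The LMM step treats each pixel spectrum as a linear combination of endmember spectra. A minimal sketch of abundance estimation by least squares with a softly enforced sum-to-one constraint; the endmember extraction and MLP refinement stages of AL-NLMM are not reproduced, the helper name `lmm_abundances` is our own, and all numbers are illustrative:

```python
import numpy as np

def lmm_abundances(pixel, endmembers, weight=1e3):
    """Estimate class abundances for one pixel spectrum under the LMM.

    pixel: (bands,) spectrum; endmembers: (bands, classes) matrix.
    The sum-to-one constraint is enforced softly by a heavily
    weighted extra row in the least-squares system.
    """
    bands, classes = endmembers.shape
    A = np.vstack([endmembers, weight * np.ones((1, classes))])
    b = np.append(pixel, weight)
    abund, *_ = np.linalg.lstsq(A, b, rcond=None)
    return abund

E = np.array([[0.1, 0.9],
              [0.8, 0.2],
              [0.5, 0.4]])          # two endmembers over three bands
true = np.array([0.3, 0.7])
mixed = E @ true                    # perfectly linear mixture
print(lmm_abundances(mixed, E))     # recovers ~[0.3, 0.7]
```

On real spectra the mixing is not exactly linear, which is the residual non-linearity the MLP stage is trained to correct.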