934 results for Object-Specific Authorization Protocol
Abstract:
Lipid analysis is commonly performed by gas chromatography (GC) under laboratory conditions. Spectroscopic techniques, however, are non-destructive and can be implemented noninvasively in vivo. Excess fat (triglycerides) in visceral adipose tissue and liver is known to predispose to metabolic abnormalities, collectively known as the metabolic syndrome. Insulin resistance is the likely cause, with diets high in saturated fat known to impair insulin sensitivity. Tissue triglyceride composition has been used as a marker of dietary intake, but it can also be influenced by tissue-specific handling of fatty acids. Recent studies have shown that adipocytes' insulin sensitivity correlates positively with their saturated fat content, contradicting the common view of dietary effects. A better understanding of the factors affecting tissue triglyceride composition is needed to provide further insights into tissue function in lipid metabolism. In this thesis, two spectroscopic techniques were developed for in vitro and in vivo analysis of tissue triglyceride composition. The in vitro studies (Study I) used Fourier transform infrared spectroscopy (FTIR), a fast and cost-effective analytical technique well suited to multivariate analysis. Infrared spectra are characterized by peak overlap, leading to poorly resolved absorbances and limited analytical performance. The in vivo studies (Studies II, III and IV) used proton magnetic resonance spectroscopy (1H-MRS), an established non-invasive clinical method for measuring metabolites in vivo. 1H-MRS has been limited in its ability to analyze triglyceride composition because of poorly resolved resonances. Using an attenuated total reflection accessory, we were able to obtain pure triglyceride infrared spectra from adipose tissue biopsies. Using multivariate curve resolution (MCR), we were able to resolve the overlapping double-bond absorbances of monounsaturated and polyunsaturated fat. MCR also resolved the isolated trans double bond and conjugated linoleic acids from an overlapping background absorbance. Using oil phantoms to study the effects of different fatty acid compositions on the echo time behaviour of triglycerides, it was concluded that the use of long echo times improved peak separation, with T2 weighting having a negligible impact. It was also discovered that the echo time behaviour of the methyl resonance of omega-3 fats differed from that of other fats owing to characteristic J-coupling. This novel insight could be used to detect omega-3 fats in human adipose tissue in vivo at very long echo times (TE = 470 and 540 ms). A comparison of 1H-MRS of adipose tissue in vivo and GC of adipose tissue biopsies in humans showed that long-TE spectra resulted in improved peak fitting and better correlations with GC data. The study also showed that calculation of fatty acid fractions from 1H-MRS data is unreliable and should not be used. Omega-3 fatty acid content derived from long-TE in vivo spectra (TE = 540 ms) correlated with total omega-3 fatty acid concentration measured by GC. The long-TE protocol used for the adipose tissue studies was subsequently extended to the analysis of liver fat composition. Respiratory triggering and a long TE yielded spectra in which the olefinic and tissue water resonances were resolved. Conversion of the derived unsaturation to double-bond content per fatty acid showed that the results were in accordance with previously published gas chromatography data on liver fat composition.
In patients with metabolic syndrome, liver fat was found to be more saturated than subcutaneous or visceral adipose tissue. The higher saturation observed in liver fat may be a result of a higher rate of de novo lipogenesis in liver than in adipose tissue. This thesis has introduced the first non-invasive method for determining adipose tissue omega-3 fatty acid content in humans in vivo. The methods introduced here have also shown that liver fat is more saturated than adipose tissue fat.
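To make the multivariate curve resolution (MCR) step mentioned above concrete, here is a minimal MCR-ALS sketch in Python. It is a generic non-negativity-constrained alternating least squares scheme with names of my own choosing; the thesis's exact MCR procedure, constraints, and initialization are not specified here.

```python
import numpy as np
from scipy.optimize import nnls

def mcr_als(D, S0, n_iter=50):
    """Resolve overlapping absorbances by alternating least squares.

    D  : (n_samples, n_wavenumbers) measured spectra
    S0 : (n_components, n_wavenumbers) initial guess of pure spectra
    Returns non-negative concentrations C and pure spectra S with D ~ C @ S.
    """
    S = S0.copy()
    for _ in range(n_iter):
        # Fit each sample's concentrations given the current pure spectra.
        C = np.array([nnls(S.T, d)[0] for d in D])
        # Fit each wavenumber's pure-spectrum values given the concentrations.
        S = np.array([nnls(C, D[:, j])[0] for j in range(D.shape[1])]).T
    return C, S
```

Non-negativity is the natural constraint for absorbance spectra; practical MCR-ALS implementations often add further constraints such as unimodality or closure.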
Abstract:
Higher education is faced with the challenge of strengthening students' competencies for the constantly evolving technology-mediated practices of knowledge work. The knowledge creation approach to learning (Paavola et al., 2004; Hakkarainen et al., 2004) provides a theoretical tool for addressing learning and teaching organized around complex problems and the development of shared knowledge objects, such as reports, products, and new practices. As in professional work practices, it appears necessary to design sufficient open-endedness and complexity into students' teamwork in order to generate unpredictable and both practically and epistemologically challenging situations. The studies in this thesis examine what kinds of practices are observed when student teams engage in knowledge-creating inquiry processes, how the students themselves perceive the process, and how to facilitate inquiry with technology mediation, tutoring, and pedagogical models. Overall, 20 student teams' collaboration processes and productions were investigated in detail. This collaboration took place in teams or small groups of 3-6 students from multiple domain backgrounds. Two pedagogical models were employed to provide heuristic guidance for the inquiry processes: the progressive inquiry model and the distributed project model. Design-based research methodology was employed in combination with a case study research design. Database materials from the courses' virtual learning environments constituted the main body of data, with additional data from students' self-reflections and student and teacher interviews. Study I examined the role of technology mediation and tutoring in directing students' knowledge production in a progressive inquiry process, investigating how the scale of scaffolding related to the nature of the knowledge produced and to the deepening of the question-explanation process. In Study II, the metaskills of knowledge-creating inquiry were explored as a challenge for higher education; metaskills here refer to the individual, collective, and object-centered aspects of monitoring collaborative inquiry. Study III examined the design of two courses and how the elaboration of shared objects unfolded under the two pedagogical models. Study IV examined how an arranged concept-development project for external customers promoted practices of distributed, partially virtual project work, and how the students coped with the knowledge creation challenge. Overall, important indicators of knowledge-creating inquiry were the following: new versions of knowledge objects and artifacts demonstrated a deepening inquiry process, and the various productions were co-created through iterations of negotiation, drafting, and versioning by the team members. Students faced the challenges of establishing collective commitment, devising practices to co-author and advance their reports, dealing with confusion, and managing culturally diverse teams. The progressive inquiry model, together with tutoring and technology, facilitated asking questions, generating explanations, and refocusing lines of inquiry. The involvement of the customers was observed to provide strong motivation for the teams. On this evidence, providing team-specific guidance, exposing students to models of scientific argumentation and expert work practices, and furnishing templates for the intended products appear to be fruitful ways to enhance inquiry processes.
At the institutional level, educators would do well to explore ways of developing collaboration with external customers, public organizations, or companies, and between educational units, in order to enhance educational practices of knowledge-creating inquiry.
Abstract:
The dispersion relations, frequency distribution function, and specific heat of zinc blende have been calculated using Houston's method on (1) a short-range force (S.R.) model of the type employed for diamond by Smith and (2) a long-range model assuming an effective charge Ze on the ions. Since the elastic constant data on ZnS are not in agreement with one another, the following values were used in these calculations: {Mathematical expression}. As compared to the results on the S.R. model, the Coulomb force causes (1) a splitting of the optical branches at (000) and a larger dispersion of these branches; (2) a rise in the acoustic frequency branches, the effect being predominant in a transverse acoustic branch along [110]; (3) a bridging of the gap of forbidden frequencies present in the S.R. model; (4) a reduction of the moments of the frequency distribution function; and (5) a flattening of the Θ-T curve. By plotting Θ/Θ0 vs. T, the experimental data of Martin and of Clusius and Harteck are found to be in perfect coincidence with the curve for the short-range model. The values of the elastic constants deduced from the ratio Θ0(theor)/Θ0(expt) agree with those of Prince and Wooster. This is surprising, as several lines of evidence indicate that the bond in zinc blende is partly covalent and partly ionic. The conclusion is inescapable that the effective charge in ZnS is a function of the wave vector {Mathematical expression}.
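For context, the standard harmonic-lattice relation (not reproduced from the paper itself) connecting the frequency distribution function g(ν) to the specific heat, and hence to the Θ-T curve, is:

```latex
% Harmonic specific heat from the frequency distribution g(\nu),
% normalized so that \int g(\nu)\,d\nu = 3N:
C_v(T) \;=\; k_B \int_0^{\nu_{\max}} g(\nu)\,
       \left(\frac{h\nu}{k_B T}\right)^{2}
       \frac{e^{h\nu/k_B T}}{\left(e^{h\nu/k_B T}-1\right)^{2}}\, d\nu
```

Θ(T) is then defined as the Debye temperature at which the Debye model reproduces the calculated C_v(T), so changes in the moments of g(ν) show up directly as changes in the Θ-T curve, as the abstract describes.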
Abstract:
The TCP transcription factors control multiple developmental traits in diverse plant species. Members of this family share an ~60-residue-long TCP domain that binds to DNA. The TCP domain is predicted to form a basic helix-loop-helix (bHLH) structure but shares little sequence similarity with canonical bHLH domains. This classifies the TCP domain as a novel class of DNA-binding domain specific to the plant kingdom. Little is known about how the TCP domain interacts with its target DNA. We report the biochemical characterization and DNA-binding properties of a TCP member in Arabidopsis thaliana, TCP4. We have shown that the 58-residue domain of TCP4 is essential and sufficient for binding to DNA and possesses DNA-binding parameters comparable to those of canonical bHLH proteins. Using a yeast-based random mutagenesis screen and site-directed mutants, we identified the residues important for DNA binding and dimer formation. Mutants defective in binding and dimerization failed to rescue the phenotype of an Arabidopsis line lacking endogenous TCP4 activity. By combining structure prediction, functional characterization of the mutants, and molecular modeling, we suggest a possible DNA-binding mechanism for this class of transcription factors.
Abstract:
Despite the central role of legitimacy in social and organizational life, we know little of the subtle meaning-making processes through which organizational phenomena, such as industrial restructuring, are legitimated in contemporary society. Therefore, this paper examines the discursive legitimation strategies used when making sense of global industrial restructuring in the media. Based on a critical discourse analysis of extensive media coverage of a revolutionary pulp and paper sector merger, we distinguish and analyze five legitimation strategies: (1) normalization, (2) authorization, (3) rationalization, (4) moralization, and (5) narrativization. We argue that while these specific legitimation strategies appear in individual texts, their recurring use in the intertextual totality of the public discussion establishes the core elements of the emerging legitimating discourse.
Abstract:
Most human ACTA1 skeletal actin gene mutations cause dominant congenital myopathies, often with severely reduced muscle function and neonatal mortality. The high sequence conservation of actin means that many mutated ACTA1 residues are identical to those in Drosophila Act88F, an indirect flight muscle-specific sarcomeric actin. Four known Act88F mutations occur at the same actin residues mutated in ten ACTA1 nemaline mutations: A138D/P, R256H/L, G268C/D/R/S and R372C/S. These Act88F mutants were examined for similar muscle phenotypes. Mutant homozygotes show phenotypes ranging from a lack of myofibrils to almost normal sarcomeres at eclosion. Aberrant Z-disc-like structures and serial Z-disc arrays, ‘zebra bodies’, are observed in homozygotes and heterozygotes of all four Act88F mutants. These electron-dense structures show homologies to human nemaline bodies/rods but are much smaller than those typically found in the human myopathy. We conclude that the Drosophila indirect flight muscles provide a good model system for studying ACTA1 mutations.
Abstract:
Market microstructure is “the study of the trading mechanisms used for financial securities” (Hasbrouck 2007). It seeks to understand the sources of value and the reasons for trade in a setting with different types of traders and different private and public information sets. The actual mechanisms of trade are a continually changing object of study; they include continuous markets, auctions, limit order books, dealer markets, and combinations of these operating as hybrid markets. Microstructure also has to allow for the possibility of multiple prices. At any given time, an investor may be faced with a multitude of different prices, depending on whether he or she is buying or selling, the quantity he or she wishes to trade, and the required speed of the trade. The price may also depend on the relationship that the trader has with potential counterparties. In this research, I touch upon all of the above issues by studying three specific areas, all of which have both practical and policy implications. First, I study the role of information in the trading and pricing of securities in markets with a heterogeneous population of traders, some of whom are informed and some not, and who trade for different private or public reasons. Second, I study the price discovery of stocks in a setting where they are traded simultaneously in more than one market. Third, I contribute to the ongoing discussion about market design, i.e. the question of which trading systems and ways of organizing trading are most efficient. A common characteristic throughout the thesis is the use of high-frequency datasets, i.e. tick data. These datasets include all trades and quotes in a given security, rather than just daily closing prices as in the traditional asset pricing literature. The thesis consists of four separate essays. In the first essay, I study price discovery for European companies cross-listed in the United States, along with explanatory variables for differences in price discovery. In the second essay, I contribute to earlier research on two issues of broad interest in market microstructure, market transparency and informed trading, by examining the effects of a change to an anonymous market at the OMX Helsinki Stock Exchange. In the third essay, I broaden the focus slightly to include releases of macroeconomic data in the United States and analyze the effect of these releases on European cross-listed stocks. The fourth and last essay applies standard methodologies of price discovery analysis in a novel way: I study price discovery within one market, between local and foreign traders.
Abstract:
Many researchers hold that the object-oriented paradigm offers substantial benefits in information systems development: if the paradigm is used, development may, for example, be faster and more efficient. On the other hand, the paradigm also has several problems. For example, it is often considered complex, it is often difficult to exploit the promised reuse, and it is still immature in some areas. Although the object-oriented paradigm has several interesting features, comprehensive knowledge of its benefits and problems is still scarce. The objective of this study was to investigate, and to gain more understanding of, the benefits and problems of the object-oriented paradigm. A review of previous studies was made, and twelve benefits and twelve problems were established. These benefits and problems were then analysed, studied, and discussed. Furthermore, a survey and several case studies were conducted in order to learn which benefits and problems of the object-oriented paradigm Finnish software companies had experienced. One hundred and four companies answered the survey, which was sent to all Finnish software companies with five or more employees. The case studies were conducted with six large Finnish software companies. The major finding was that Finnish software companies were exceptionally positive towards object-oriented information systems development and had experienced very few of the proposed problems. Finally, two models for further research were developed: the first presents connections between the benefits, and the second between the problems.
Abstract:
A study has been carried out on the non-specific interference due to serum in the avidin-biotin micro-ELISA for monkey chorionic gonadotropin. The results suggest that the interference is not due to any proteolytic activity in the serum, but to immunoglobulins or associated factors interfering at the level of the antigen-antibody interaction. The interference was eliminated by heating samples at 60°C for 30 min.
Abstract:
On the one hand, this thesis attempts to develop and empirically test an ethically defensible theorization of the relationship between human resource management (HRM) and competitive advantage. The specific empirical evidence indicates that at least part of HRM's causal influence on employee performance may operate indirectly, first through a social architecture and then through psychological empowerment. However, the evidence concerning a potential influence of HRM on organizational performance in particular seems to put in question some of the rhetoric within the HRM research community. On the other hand, the thesis tries to explicate and defend a certain attitude towards the philosophically oriented debates within organization science. This involves suggestions as to how we should understand meaning, reference, truth, justification, and knowledge. On this understanding, it is not fruitful to see either the problems of empirical social science or their solutions as fundamentally philosophical ones. It is argued that the notorious problems of social science, exemplified in this thesis by research on HRM, can be seen as related to dynamic complexity in combination with both the ethical and the pragmatic difficulty of “laboratory-like experiments”. Solutions … can only be sought by informed trial and error, depending on the perceived familiarity with the object(s) of research. The odds are against anybody who hopes for clearly adequate social scientific answers to more complex questions. Social science is in particular unlikely to arrive at largely accepted knowledge of the kind “if we do this, then that will happen”, or even “if we do this, then that is likely to happen”. One of the problems probably facing most social scientific research communities is to specify and agree upon the “this” and the “that” and to provide convincing evidence of how they are (causally) related. On most more complex questions, the role of social science seems largely to remain that of contributing to a (critical) conversation rather than arriving at more generally accepted knowledge. This is ultimately what is both argued and, in a sense, demonstrated, using research on the relationship between HRM and organizational performance as an example.
Abstract:
In this two-part series of papers, a generalized non-orthogonal amplify-and-forward (GNAF) protocol, which generalizes several known cooperative diversity protocols, is proposed. Transmission in the GNAF protocol comprises two phases: the broadcast phase and the cooperation phase. In the broadcast phase, the source broadcasts its information to the relays as well as to the destination. In the cooperation phase, the source and the relays together transmit a space-time code in a distributed fashion. The GNAF protocol relaxes the constraints imposed on the code structure by the protocol of Jing and Hassibi. In Part I of this paper, code design criteria are obtained and it is shown that the GNAF protocol is both delay-efficient and coding-gain-efficient. Moreover, the GNAF protocol enables the use of sphere decoders at the destination with non-exponential maximum-likelihood (ML) decoding complexity. In Part II, several low-decoding-complexity code constructions are studied and a lower bound on the diversity-multiplexing gain tradeoff of the GNAF protocol is obtained.
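As a minimal sketch of the two-phase structure described above (all names, dimensions, and the random orthogonal matrices are illustrative assumptions of mine, not the paper's code construction):

```python
import numpy as np

rng = np.random.default_rng(1)

def two_phase_af(s, num_relays=3, noise_std=0.05):
    """Toy two-phase (broadcast + cooperation) amplify-and-forward run.

    Phase 1: the source broadcasts the symbol vector `s`; the
    destination and every relay receive independently noisy copies.
    Phase 2: each relay forwards a linear transformation of its
    received vector (a random orthogonal matrix stands in for one
    component of a distributed space-time code).
    """
    T = len(s)
    noisy = lambda x: x + noise_std * rng.standard_normal(T)

    y_direct = noisy(s)                          # broadcast phase: source -> destination
    relay_rx = [noisy(s) for _ in range(num_relays)]  # source -> relays

    y_coop = np.zeros(T)
    for r in relay_rx:                           # cooperation phase
        Q, _ = np.linalg.qr(rng.standard_normal((T, T)))   # relay-specific orthogonal matrix
        y_coop += noisy(Q @ r / np.sqrt(num_relays))       # power-normalized forward
    return y_direct, y_coop

y_direct, y_coop = two_phase_af(np.sign(rng.standard_normal(4)))  # BPSK-like source vector
```

The destination would jointly process the broadcast-phase and cooperation-phase observations; the GNAF point is that the code structure across relays can be designed far more freely than the sketch's random matrices suggest.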
Abstract:
In many applications of wireless ad hoc networks, the wireless nodes are owned by rational and intelligent users. In this paper, we call nodes selfish if they are owned by independent users whose only objective is to maximize their individual goals. In such situations, it may not be possible to use existing protocols for wireless ad hoc networks, as these protocols assume that nodes follow the prescribed protocol without deviation. Stimulating cooperation among such nodes is an interesting and challenging problem. Providing incentives and pricing the transactions are well-known approaches to stimulating cooperation. In this paper, we present a game-theoretic framework for a truthful broadcast protocol and a strategy-proof pricing mechanism called the Immediate Predecessor Node Pricing Mechanism (IPNPM). Strategy-proof here means that truthful revelation of cost is a weakly dominant strategy (in game-theoretic terms) for each node. In order to steer our mechanism-design approach towards practical implementation, we compute the payments to nodes using a distributed algorithm. We also propose a new protocol for broadcast in wireless ad hoc networks with selfish nodes based on IPNPM. The features of the proposed broadcast protocol are reliability and a significantly reduced number of packet forwards compared to the number of network nodes, which in turn leads to lower system-wide power consumption for broadcasting a single packet. Our simulation results show the efficacy of the proposed broadcast protocol.
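To illustrate why a payment rule can make truthful cost revelation weakly dominant, here is a generic Vickrey-style (second-price) example; it is a textbook illustration of strategy-proofness, not the IPNPM rule itself:

```python
def select_and_pay(declared_costs):
    """Second-price selection of one packet forwarder.

    The cheapest declared forwarder is chosen and paid the
    second-lowest declared cost, so its payment does not depend on
    its own report: no node can gain by misreporting its true cost.
    """
    order = sorted(declared_costs, key=declared_costs.get)
    winner = order[0]
    payment = declared_costs[order[1]]  # second-lowest declared cost
    return winner, payment

winner, payment = select_and_pay({"n1": 3.0, "n2": 5.5, "n3": 4.2})
# winner == "n1", payment == 4.2: n1 profits 1.2 over its true cost,
# and that profit is unchanged by any report below the runner-up's bid.
```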
Abstract:
Chip multiprocessors (CMPs) enable the simultaneous execution of multiple applications on the same platform, sharing cache resources. Diversity in the cache access patterns of these simultaneously executing applications can trigger inter-application interference, leading to cache pollution. Whereas a large cache can ameliorate this problem, the growing power consumption of larger caches, amplified at sub-100 nm technologies, makes this solution prohibitive. In this paper, in order to address power-aware cache performance, we propose a caching structure that provides the following: (1) definition of application-specific cache partitions as aggregations of caching units (molecules), where the parameters of each molecule, namely size, associativity, and line size, are chosen so that its power consumption and access time are optimal for the given technology; (2) application-specific resizing of cache partitions, with variable and adaptive associativity per cache line, way size, and variable line size; and (3) a replacement policy that is transparent to the partition in terms of size and heterogeneity in associativity and line size. Through simulation studies we establish the superiority of the molecular cache (a cache built as an aggregation of molecules), which offers a 29% power advantage over an equivalently performing traditional cache.
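A hedged sketch of the partition-as-aggregation idea (Python used for brevity; all parameter values, names, and the resizing policy below are placeholders of mine, not the paper's design):

```python
from dataclasses import dataclass, field

@dataclass
class Molecule:
    """One caching unit; the paper chooses these parameters so that the
    molecule's power and access time are optimal for the technology.
    The numbers below are placeholders."""
    size_kb: int = 8
    associativity: int = 2
    line_size_b: int = 32

@dataclass
class Partition:
    """An application-specific partition built as an aggregation of molecules."""
    app_id: int
    molecules: list = field(default_factory=list)

    def resize(self, pool, demand_kb):
        """Grow or shrink in whole-molecule steps towards the app's demand."""
        unit = Molecule().size_kb
        target = -(-demand_kb // unit)          # ceil: molecules needed
        while len(self.molecules) < target and pool:
            self.molecules.append(pool.pop())   # take free molecules
        while len(self.molecules) > target:
            pool.append(self.molecules.pop())   # release surplus molecules

pool = [Molecule() for _ in range(64)]          # 512 KB worth of molecules
p = Partition(app_id=1)
p.resize(pool, demand_kb=96)                    # partition now holds 12 molecules
```

Resizing by whole molecules is what keeps each unit's power and access-time characteristics fixed while the partition itself adapts to the application.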
Abstract:
We address the problem of distributed space-time coding with reduced decoding complexity for wireless relay networks. The transmission protocol follows a two-hop model wherein the source transmits a vector in the first hop and, in the second hop, each relay transmits a transformation of its received vector by a relay-specific unitary matrix. Design criteria are derived for this system model, and codes achieving full diversity are proposed. For a fixed number of relay nodes, the general system model considered in this paper admits code constructions with lower decoding complexity than codes based on some earlier system models.
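In notation of my own choosing (a standard way of writing such a two-hop model, not copied from the paper), the destination's received signal under this protocol can be expressed as:

```latex
% Source vector s, source-relay gains g_i, relay-destination gains h_i,
% relay-specific unitary matrices U_i, relay noise n_i, destination noise w:
\mathbf{y} \;=\; \sum_{i=1}^{K} h_i\, U_i \left( g_i \mathbf{s} + \mathbf{n}_i \right) + \mathbf{w}
        \;=\; X(\mathbf{s})\,\mathbf{h} + \tilde{\mathbf{n}},
\qquad
X(\mathbf{s}) = \bigl[\, U_1 \mathbf{s} \;\; \cdots \;\; U_K \mathbf{s} \,\bigr],
\quad
\mathbf{h} = (g_1 h_1, \ldots, g_K h_K)^{\mathsf{T}}
```

Written this way, the relays jointly emit the codeword matrix X(s), and the usual rank criterion applies: full diversity requires the difference X(s) - X(s') to have full rank for every pair of distinct source vectors, which is the kind of design criterion such system models lead to.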