822 results for Consumerization of IT


Relevance:

90.00%

Publisher:

Abstract:

Electrostatic torsional nano-electro-mechanical system (NEMS) actuators are analyzed in this paper. The dependence of the critical tilting angle and voltage on the dimensions of the structure is investigated, with van der Waals (vdW) effects taken into consideration. The pull-in phenomenon in the absence of electrostatic torque is studied, and a critical pull-in gap is derived. A dimensionless equation of motion is presented, and its qualitative analysis shows that the equilibrium points of the corresponding autonomous system include centers, stable foci, and unstable saddle points. Hopf bifurcation points and pitchfork bifurcation points also exist in the system. The phase portraits connecting these equilibrium points exhibit periodic orbits, heteroclinic orbits, and homoclinic orbits.
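
To make the qualitative analysis concrete, here is a minimal Python sketch of equilibrium-finding and classification for a schematic one-degree-of-freedom torsional pull-in model; the model equation and parameter values are illustrative assumptions, not the paper's actual dimensionless equation.

# A schematic dimensionless torsional model (NOT the paper's equation):
#     theta'' = -theta + beta / (1 - theta)**2
# where beta lumps the attractive (electrostatic/vdW) loading. For the
# conservative system (theta' = omega, omega' = f(theta)), equilibria are
# centers where f'(theta) < 0 and saddles where f'(theta) > 0.
import numpy as np
from scipy.optimize import brentq

def f(theta, beta):
    """Net dimensionless torque: elastic restoring plus attractive loading."""
    return -theta + beta / (1.0 - theta) ** 2

def classify_equilibria(beta, n_scan=2000):
    grid = np.linspace(1e-6, 1.0 - 1e-6, n_scan)
    roots = []
    for a, b in zip(grid[:-1], grid[1:]):
        if f(a, beta) * f(b, beta) < 0:
            roots.append(brentq(f, a, b, args=(beta,)))
    for th in roots:
        dfdth = -1.0 + 2.0 * beta / (1.0 - th) ** 3   # f'(theta) at the root
        kind = "saddle (unstable)" if dfdth > 0 else "center (stable)"
        print(f"beta={beta:.3f}: equilibrium at theta={th:.4f} -> {kind}")

# Below the pull-in value (4/27 for this toy model) a stable and an unstable
# equilibrium coexist; at pull-in they merge and the stable branch vanishes.
for beta in (0.05, 0.10, 0.145):
    classify_equilibria(beta)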

Relevance:

90.00%

Publisher:

Abstract:

The OGY method is one of the most important methods for controlling chaos. It stabilizes a hyperbolic periodic orbit by making small perturbations to a system parameter. This paper improves the method of choosing the parameter and gives a mathematical proof of it.
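
For readers unfamiliar with the method, a minimal Python sketch of OGY-style control on the 1-D logistic map follows; the map, control window, and gain come from the textbook construction and do not reproduce the paper's improved parameter-choice rule.

# OGY-style control of the logistic map x_{n+1} = r * x_n * (1 - x_n).
# In 1-D the method reduces to choosing a small parameter perturbation dr
# that places the next iterate on the unstable fixed point x* = 1 - 1/r0.
import numpy as np

r0, eps = 3.9, 0.02               # nominal parameter, max allowed |dr|
x_star = 1.0 - 1.0 / r0           # unstable fixed point of the nominal map
Lam = r0 * (1.0 - 2.0 * x_star)   # local multiplier df/dx at x* (= 2 - r0)
g = x_star * (1.0 - x_star)       # sensitivity df/dr at x*

x = 0.4
for n in range(60):
    dr = 0.0
    if abs(x - x_star) < 0.05:    # activate control only near the orbit
        dr = float(np.clip(-Lam * (x - x_star) / g, -eps, eps))
    x = (r0 + dr) * x * (1.0 - x)
    if n % 10 == 9:
        print(f"n={n+1:3d}  x={x:.6f}  (target {x_star:.6f})")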

Relevance:

90.00%

Publisher:

Abstract:

This paper was presented at the 11th Annual Conference of the European Society for the History of Economic Thought (ESHET).

Relevance:

90.00%

Publisher:

Abstract:

This paper presents new evidence on the role of segregation into firms, occupations within a firm, and stratification into professional categories within firm-occupations in explaining the gender wage gap. I use a generalized earnings model that allows observed and unobserved group characteristics to have different impacts on the wages of men and women within the same group. The database is a large sample of individual wage data from the 1995 Spanish Wage Structure Survey. Results indicate that firm segregation in our sample accounts for around one-fifth of the raw gender wage gap. Occupational segregation within firms accounts for about one-third of the raw wage gap, and stratification into different professional categories within firms and occupations explains another one-third of it. The remaining one-fifth of the overall gap arises from better outcomes of men relative to women within professional categories. It is also found that rewards to both observable and unobservable skills, particularly those related to education, are higher for males than for females within the same group. Finally, mean wages in occupations or job categories with a higher fraction of female co-workers are lower, but the negative impact of femaleness is higher for women.
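
As a rough illustration of the decomposition logic (not the paper's generalized earnings model), the following Python sketch measures how much of a raw gender gap in log wages survives within progressively finer groups; all column names and data are hypothetical.

# Within-group demeaning: the gap that remains after removing group means
# approximates the gap "within" firms, firm-occupations, etc.
import pandas as pd

def within_group_gap(df, group_cols):
    """Male-female gap in log wages after demeaning within groups."""
    demeaned = df["log_wage"] - df.groupby(group_cols)["log_wage"].transform("mean")
    return demeaned[df["male"] == 1].mean() - demeaned[df["male"] == 0].mean()

def decompose(df):
    raw = df.loc[df["male"] == 1, "log_wage"].mean() - df.loc[df["male"] == 0, "log_wage"].mean()
    print(f"raw gap:                  {raw:.4f}")
    for cols, label in [(["firm"], "within firm"),
                        (["firm", "occ"], "within firm-occupation"),
                        (["firm", "occ", "category"], "within firm-occ-category")]:
        print(f"{label:<25} {within_group_gap(df, cols):.4f}")

# Tiny synthetic example (illustrative only):
df = pd.DataFrame({
    "log_wage": [2.0, 1.8, 2.2, 1.9, 2.5, 2.1],
    "male":     [1, 0, 1, 0, 1, 0],
    "firm":     ["A", "A", "A", "A", "B", "B"],
    "occ":      ["x", "x", "y", "y", "x", "x"],
    "category": ["c1", "c1", "c1", "c2", "c1", "c1"],
})
decompose(df)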

Relevance:

90.00%

Publisher:

Abstract:

The toxicity of sediments in Biscayne Bay and many adjoining tributaries was determined as part of a bioeffects assessment program managed by NOAA's National Status and Trends Program. The objectives of the survey were to determine: (1) the incidence and degree of toxicity of sediments throughout the study area; (2) the spatial patterns (or gradients) in chemical contamination and toxicity, if any, throughout the study area; (3) the spatial extent of chemical contamination and toxicity; and (4) the statistical relationships between measures of toxicity and concentrations of chemicals in the sediments. The survey was designed to characterize sediment quality throughout the greater Biscayne Bay area. Surficial sediment samples were collected during 1995 and 1996 from 226 randomly chosen locations throughout nine major regions. Laboratory toxicity tests were performed as indicators of potential ecotoxicological effects in sediments. A battery of tests was performed to generate information from different phases (components) of the sediments. Tests were selected to represent a range of toxicological endpoints from acute to chronic sublethal responses. Toxicological tests were conducted to measure: reduced survival of adult amphipods exposed to solid-phase sediments; impaired fertilization success and abnormal morphological development in gametes and embryos, respectively, of sea urchins exposed to pore waters; reduced metabolic activity of a marine bioluminescent bacterium exposed to organic solvent extracts; induction of a cytochrome P-450 reporter gene system in exposures to solvent extracts; and reduced reproductive success in marine copepods exposed to solid-phase sediments. Contamination and toxicity were most severe in several peripheral canals and tributaries, including the lower Miami River, adjoining the main axis of the bay. In the open basins of the bay, chemical concentrations and toxicity generally were higher in areas north of the Rickenbacker Causeway than south of it. Sediments from the main basins of the bay generally were less toxic than those from the adjoining tributaries and canals. The different toxicity tests, however, indicated differences in severity, incidence, spatial patterns, and spatial extent of toxicity. The most sensitive test among those performed on all samples, a bioassay of normal morphological development of sea urchin embryos, indicated that toxicity was pervasive throughout the entire study area. The least sensitive test, an acute bioassay performed with a benthic amphipod, indicated that toxicity was restricted to a very small percentage of the area. Both the degree and spatial extent of chemical contamination and toxicity in this study area were similar to or less severe than those observed in many other areas of the U.S. The spatial extent of toxicity in all four tests performed throughout the bay was comparable to the "national averages" calculated by NOAA from previous surveys conducted in a similar manner. Several trace metals occurred in concentrations in excess of those expected in reference sediments. Mixtures of substances, including pesticides, petroleum constituents, trace metals, and ammonia, were associated statistically with the measures of toxicity. Substances most elevated in concentration relative to numerical guidelines and associated with toxicity included polychlorinated biphenyls, DDT pesticides, polynuclear aromatic hydrocarbons, hexachlorocyclohexanes, lead, and mercury. These (and other) substances occurred in concentrations greater than effects-based guidelines in the samples that were most toxic in one or more of the tests. (PDF contains 180 pages.)

Relevance:

90.00%

Publisher:

Abstract:

In this era of proliferating scientific information it is difficult to keep up with the literature, even in one's own field. Review articles are helpful in summarizing the status of knowledge. In oyster biology, several such published reviews have been of great help to working scientists. The outstanding contributions that come to mind are those by Baughman (1948), Korringa (1952), Joyce (1972), Breisch and Kennedy (1980), and Kennedy and Breisch (1981). If done well, such compilations serve as checkpoints, eliminating or vastly reducing the need to consult the literature in detail. On Long Island, New York, where the hard clam Mercenaria mercenaria is the major commercial resource, we have felt the need for some time for a compendium of knowledge on this important mollusk. Several years ago my secretary, students, and I began to gather materials for an annotated bibliography. We have already published a collection of 2233 titles (McHugh et al. 1982), nearly all accompanied by abstracts, and in this publication we have added another 460. The experience has been rewarding. We have been surprised at the extent of the literature, much of it only remotely related to the shellfish industry itself, but nevertheless throwing light on the biology, physiology, and many other aspects of the scientific knowledge of hard clams. The following bibliography is divided into three parts. Part 1 comprises the bulk of the bibliography, while Parts 2 and 3 contain additional titles that we decided to include during editing, submission, and approval of the manuscript for publication. All three parts are indexed together, however. We also reexamined those titles in the previous bibliography (McHugh et al. 1982) which did not include abstracts. These are included in Parts 2 and 3 of this bibliography. Most of these contained no specific reference to Mercenaria mercenaria. A few searches were terminated for various reasons. (PDF file contains 66 pages.)

Relevance:

90.00%

Publisher:

Abstract:

This paper is an outcome of the following dissertation:

Relevance:

90.00%

Publisher:

Abstract:

The main focus of this thesis is the use of high-throughput sequencing technologies in functional genomics (in particular in the form of ChIP-seq, chromatin immunoprecipitation coupled with sequencing, and RNA-seq) and the study of the structure and regulation of transcriptomes. Some parts of it are of a more methodological nature while others describe the application of these functional genomic tools to address various biological problems. A significant part of the research presented here was conducted as part of the ENCODE (ENCyclopedia Of DNA Elements) Project.

The first part of the thesis focuses on the structure and diversity of the human transcriptome. Chapter 1 contains an analysis of the diversity of the human polyadenylated transcriptome based on RNA-seq data generated for the ENCODE Project. Chapter 2 presents a simulation-based examination of the performance of some of the most popular computational tools used to assemble and quantify transcriptomes. Chapter 3 includes a study of variation in gene expression, alternative splicing, and allelic expression bias at the single-cell level and on a genome-wide scale in human lymphoblastoid cells; it also brings forward a number of methodological considerations critical to the practice of single-cell RNA-seq measurements.

The second part presents several studies applying functional genomic tools to the study of the regulatory biology of organellar genomes, primarily in mammals but also in plants. Chapter 5 contains an analysis of the occupancy of the human mitochondrial genome by TFAM, an important structural and regulatory protein in mitochondria, using ChIP-seq. In Chapter 6, the mitochondrial DNA occupancy of the TFB2M transcriptional regulator, the MTERF termination factor, and the mitochondrial RNA and DNA polymerases is characterized. Chapter 7 consists of an investigation into the curious phenomenon of the physical association of nuclear transcription factors with mitochondrial DNA, based on the diverse collections of transcription factor ChIP-seq datasets generated by the ENCODE, mouseENCODE and modENCODE consortia. In Chapter 8 this line of research is further extended to existing publicly available ChIP-seq datasets in plants and their mitochondrial and plastid genomes.

The third part is dedicated to the analytical and experimental practice of ChIP-seq. As part of the ENCODE Project, a set of metrics for assessing the quality of ChIP-seq experiments was developed, and the results of this activity are presented in Chapter 9. These metrics were later used to carry out a global analysis of ChIP-seq quality in the published literature (Chapter 10). In Chapter 11, the development and initial application of an automated robotic ChIP-seq pipeline (in which these metrics also played a major role) are presented.
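
As one concrete example of the kind of metric discussed in Chapter 9, here is a minimal Python sketch of FRiP (Fraction of Reads in Peaks), one of the commonly used ENCODE ChIP-seq quality measures; the tuple-based input is a simplification of real BAM/BED processing, not the ENCODE reference code.

# FRiP: the fraction of sequenced reads whose midpoint falls inside a
# called peak. Low FRiP suggests weak enrichment or a failed experiment.
from bisect import bisect_right

def frip(reads, peaks):
    """reads/peaks are (chrom, start, end) tuples; peaks assumed non-overlapping."""
    by_chrom = {}
    for chrom, start, end in peaks:
        by_chrom.setdefault(chrom, []).append((start, end))
    for ivs in by_chrom.values():
        ivs.sort()                                    # sort peaks by start
    hits = 0
    for chrom, start, end in reads:
        mid = (start + end) // 2
        ivs = by_chrom.get(chrom, [])
        i = bisect_right(ivs, (mid, float("inf"))) - 1  # rightmost start <= mid
        if i >= 0 and ivs[i][0] <= mid < ivs[i][1]:
            hits += 1
    return hits / len(reads) if reads else 0.0

reads = [("chr1", 100, 150), ("chr1", 5000, 5050), ("chr2", 10, 60)]
peaks = [("chr1", 80, 300)]
print(f"FRiP = {frip(reads, peaks):.2f}")   # 1 of 3 reads in peaks -> 0.33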

The fourth part presents the results of some additional projects the author has been involved in, including the study of the role of the Piwi protein in the transcriptional regulation of transposon expression in Drosophila (Chapter 12), and the use of single-cell RNA-seq to characterize the heterogeneity of gene expression during cellular reprogramming (Chapter 13).

The last part of the thesis provides a review of the results of the ENCODE Project and the interpretation they have revealed of the complexity of the biochemical activity exhibited by mammalian genomes (Chapters 15 and 16), an overview of the technical developments expected in the near future and their impact on the field of functional genomics (Chapter 14), and a discussion of some research areas that have so far been insufficiently explored, the future study of which will, in the opinion of the author, provide deep insights into many fundamental but not yet completely answered questions about the transcriptional biology of eukaryotes and its regulation.

Relevance:

90.00%

Publisher:

Abstract:

In the quest for a descriptive theory of decision-making, the rational actor model in economics imposes rather unrealistic expectations and abilities on human decision makers. The further we move from idealized scenarios, such as perfectly competitive markets, and ambitiously extend the reach of the theory to describe everyday decision-making situations, the less sense these assumptions make. Behavioural economics has instead proposed models based on assumptions that are more psychologically realistic, with the aim of gaining more precision and descriptive power. Increased psychological realism, however, comes at the cost of a greater number of parameters and model complexity. There is now a plethora of models, based on different assumptions and applicable in different contextual settings, and selecting the right model to use tends to be an ad hoc process. In this thesis, we develop optimal experimental design methods and evaluate different behavioural theories against evidence from lab and field experiments.

We look at evidence from controlled laboratory experiments. Subjects are presented with choices between monetary gambles or lotteries. Different decision-making theories evaluate the choices differently and would make distinct predictions about the subjects' choices. Theories whose predictions are inconsistent with the actual choices can be systematically eliminated. Behavioural theories can have multiple parameters, requiring complex experimental designs with a very large number of possible choice tests. This imposes computational and economic constraints on using classical experimental design methods. We develop a methodology of adaptive tests, Bayesian Rapid Optimal Adaptive Designs (BROAD), that sequentially chooses the "most informative" test at each stage and, based on the response, updates its posterior beliefs over the theories, which informs the next most informative test to run. BROAD utilizes the Equivalence Class Edge Cutting (EC2) criterion to select tests. We prove that the EC2 criterion is adaptively submodular, which allows us to prove theoretical guarantees against the Bayes-optimal testing sequence even in the presence of noisy responses. In simulated ground-truth experiments, we find that the EC2 criterion recovers the true hypotheses with significantly fewer tests than more widely used criteria such as Information Gain and Generalized Binary Search. We show, both theoretically and experimentally, that these popular criteria can, surprisingly, perform poorly in the presence of noise or subject errors. Furthermore, we use the adaptive submodularity of EC2 to implement an accelerated greedy version of BROAD, which leads to orders-of-magnitude speedups over other methods.
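
The following Python sketch illustrates the adaptive-testing loop in simplified form, scoring candidate tests by expected information gain rather than the EC2 criterion that BROAD actually uses (and which, as noted above, has stronger guarantees under noise); all data structures are hypothetical.

# Maintain a posterior over candidate theories; greedily pick the next
# binary-choice test; update the posterior on the observed response.
import numpy as np

def entropy(p):
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def next_test(posterior, pred):
    """pred[k, t] = P(subject picks option A on test t | theory k).
    Return the test with the largest expected reduction in entropy."""
    scores = []
    for t in range(pred.shape[1]):
        pA = posterior @ pred[:, t]               # marginal P(choose A)
        gain = entropy(posterior)
        for resp, p_resp in ((1, pA), (0, 1.0 - pA)):
            like = pred[:, t] if resp else 1.0 - pred[:, t]
            post = posterior * like
            if p_resp > 0:
                gain -= p_resp * entropy(post / post.sum())
        scores.append(gain)
    return int(np.argmax(scores))

def update(posterior, pred, t, resp):
    like = pred[:, t] if resp else 1.0 - pred[:, t]
    post = posterior * like
    return post / post.sum()

# Two hypothetical theories, three candidate tests:
pred = np.array([[0.9, 0.5, 0.2],
                 [0.1, 0.5, 0.8]])
posterior = np.array([0.5, 0.5])
t = next_test(posterior, pred)
print("most informative test:", t)
print("posterior after response:", update(posterior, pred, t, resp=1))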

We use BROAD to perform two experiments. First, we compare the main classes of theories for decision-making under risk, namely: expected value, prospect theory, constant relative risk aversion (CRRA), and moments models. Subjects are given an initial endowment and sequentially presented with choices between two lotteries, with the possibility of losses. The lotteries are selected using BROAD, and 57 subjects from Caltech and UCLA are incentivized by randomly realizing one of the lotteries chosen. Aggregate posterior probabilities over the theories show limited evidence in favour of CRRA and moments models. Classifying the subjects into types showed that most subjects are described by prospect theory, followed by expected value. Adaptive experimental design raises the possibility that subjects could engage in strategic manipulation, i.e. subjects could mask their true preferences and choose differently in order to obtain more favourable tests in later rounds, thereby increasing their payoffs. We pay close attention to this problem; strategic manipulation is ruled out since it is infeasible in practice, and also since we do not find any signatures of it in our data.

In the second experiment, we compare the main theories of time preference: exponential discounting, hyperbolic discounting, "present bias" models (quasi-hyperbolic (α, β) discounting and fixed-cost discounting), and generalized-hyperbolic discounting. 40 subjects from UCLA were given choices between two options: a smaller but more immediate payoff versus a larger but later payoff. We found very limited evidence for present-bias models and hyperbolic discounting, and most subjects were classified as generalized-hyperbolic discounting types, followed by exponential discounting.
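
For reference, a small Python sketch of the discount functions being compared, written with the common (β, δ) parameterization for the quasi-hyperbolic model and illustrative parameter values rather than estimates from the experiment:

import numpy as np

def exponential(t, delta=0.9):
    return delta ** t

def hyperbolic(t, k=0.5):
    return 1.0 / (1.0 + k * t)

def quasi_hyperbolic(t, beta=0.7, delta=0.95):
    # "present bias": a one-off extra discount beta on all future payoffs
    return np.where(t == 0, 1.0, beta * delta ** t)

def generalized_hyperbolic(t, alpha=1.0, beta=2.0):
    return (1.0 + alpha * t) ** (-beta / alpha)

t = np.arange(0, 6)
for name, f in [("exponential", exponential), ("hyperbolic", hyperbolic),
                ("quasi-hyperbolic", quasi_hyperbolic),
                ("generalized-hyperbolic", generalized_hyperbolic)]:
    print(f"{name:<23}", np.round(f(t), 3))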

In these models the passage of time is linear. We instead consider a psychological model where the perception of time is subjective. We prove that when the biological (subjective) time is positively dependent, it gives rise to hyperbolic discounting and temporal choice inconsistency.

We also test the predictions of behavioural theories in the "wild". We pay attention to prospect theory, which emerged as the dominant theory in our lab experiments on risky choice. Loss aversion and reference dependence predict that consumers will behave in ways distinct from what the standard rational model predicts. Specifically, loss aversion predicts that when an item is being offered at a discount, the demand for it will be greater than that explained by its price elasticity. Even more importantly, when the item is no longer discounted, demand for its close substitute will increase excessively. We tested this prediction using a discrete choice model with a loss-averse utility function on data from a large eCommerce retailer. Not only did we identify loss aversion, but we also found that the effect decreased with consumers' experience. We outline the policy implications that consumer loss aversion entails, and strategies for competitive pricing.
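
A minimal Python sketch of the mechanism described, with a loss-averse gain/loss term inside a logit choice model over two substitutes; the parameter values and the reference-price rule (last observed price) are illustrative assumptions, not the estimated model.

import numpy as np

def gain_loss(x, lam=2.25):
    """Prospect-theory style value: losses weighted lam times gains."""
    return np.where(x >= 0, x, lam * x)

def choice_probs(prices, ref_prices, quality, alpha=1.0, eta=0.5):
    """Logit choice: base utility minus price, plus a gain/loss term
    relative to each item's reference price."""
    u = quality - alpha * prices + eta * gain_loss(ref_prices - prices)
    expu = np.exp(u - u.max())
    return expu / expu.sum()

q = np.array([5.0, 5.0])                # two close substitutes
ref = np.array([10.0, 10.0])
print("no discount:      ", choice_probs(np.array([10.0, 10.0]), ref, q))
# Item 0 discounted: paying below the reference is coded as a gain,
# so demand rises beyond what price elasticity alone would explain.
print("item 0 at 8:      ", choice_probs(np.array([8.0, 10.0]), ref, q))
# Discount removed: the old low price is now the reference, so the former
# deal reads as a loss and demand shifts excessively to the substitute.
print("discount removed: ", choice_probs(np.array([10.0, 10.0]), np.array([8.0, 10.0]), q))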

In future work, BROAD can be widely applied to testing different behavioural models, e.g. in social preference and game theory, and in different contextual settings. Additional measurements beyond choice data, including biological measurements such as skin conductance, can be used to more rapidly eliminate hypotheses and speed up model comparison. Discrete choice models also provide a framework for testing behavioural models with field data, and encourage combined lab-field experiments.

Relevance:

90.00%

Publisher:

Abstract:

Energy and sustainability have become among the most critical issues of our generation. While the abundant potential of renewable energy sources such as solar and wind provides a real opportunity for sustainability, their intermittency and uncertainty present a daunting operating challenge. This thesis aims to develop analytical models, deployable algorithms, and real systems to enable efficient integration of renewable energy into complex distributed systems with limited information.

The first thrust of the thesis is to make IT systems more sustainable by facilitating the integration of renewable energy into these systems. IT represents one of the fastest-growing sectors in energy usage and greenhouse gas emissions. Over the last decade there have been dramatic improvements in the energy efficiency of IT systems, but the efficiency improvements do not necessarily lead to reductions in energy consumption because more servers are demanded. Further, little effort has been put into making IT more sustainable, and most of the improvements come from improved "engineering" rather than improved "algorithms". In contrast, my work focuses on developing algorithms, with rigorous theoretical analysis, that improve the sustainability of IT. In particular, this thesis seeks to exploit the flexibilities of cloud workloads both (i) in time, by scheduling delay-tolerant workloads, and (ii) in space, by routing requests to geographically diverse data centers. These opportunities allow data centers to adaptively respond to renewable availability, varying cooling efficiency, and fluctuating energy prices, while still meeting performance requirements. The design of the enabling algorithms is, however, very challenging because of limited information, non-smooth objective functions, and the need for distributed control. Novel distributed algorithms are developed, with theoretically provable guarantees, to enable "follow the renewables" routing. Moving from theory to practice, I helped HP design and implement industry's first Net-zero Energy Data Center.
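
As a toy illustration of "follow the renewables" routing (not the thesis's algorithms, which handle uncertainty and carry provable guarantees), the following Python sketch greedily soaks up local renewable supply first and then places residual load where grid power is cheapest:

import numpy as np

def route(load, capacity, renewable, price):
    """Greedy dispatch of `load` units of work across data centers."""
    alloc = np.minimum(capacity, renewable)          # free, green energy first
    alloc = alloc * min(1.0, load / max(alloc.sum(), 1e-9))
    remaining = load - alloc.sum()
    for i in np.argsort(price):                      # cheapest grid power next
        take = min(remaining, capacity[i] - alloc[i])
        alloc[i] += take
        remaining -= take
        if remaining <= 0:
            break
    return alloc

capacity  = np.array([100.0, 100.0, 100.0])   # per-site capacity (synthetic)
renewable = np.array([ 30.0,  80.0,  10.0])   # local wind/solar output
price     = np.array([ 0.12,  0.10,  0.07])   # grid price per unit energy
print(route(150.0, capacity, renewable, price))   # -> [30. 80. 40.]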

The second thrust of this thesis is to use IT systems to improve the sustainability and efficiency of our energy infrastructure through data center demand response. The main challenges as we integrate more renewable sources into the existing power grid come from the fluctuation and unpredictability of renewable generation. Although energy storage and reserves can potentially solve these issues, they are very costly. One promising alternative is to make cloud data centers demand responsive. The potential of such an approach is huge.

To realize this potential, we need adaptive and distributed control of cloud data centers and new electricity market designs for distributed electricity resources. My work is progressing in both directions. In particular, I have designed online algorithms with theoretically guaranteed performance for data center operators to deal with uncertainties under popular demand response programs. Based on local control rules of customers, I have further designed new pricing schemes for demand response that align the interests of customers, utility companies, and society, improving social welfare.

Relevance:

90.00%

Publisher:

Abstract:

This thesis is divided into three chapters. In the first chapter we study the smooth sets with respect to a Borel equivalence relation E on a Polish space X. The collection of smooth sets forms a σ-ideal. We think of smooth sets as analogs of countable sets, and we show that an analog of the perfect set theorem for Σ¹₁ sets holds in the context of smooth sets. We also show that the collection of Σ¹₁ smooth sets is Π¹₁ on the codes. The analogs of thin sets are called sparse sets. We prove that there is a largest Π¹₁ sparse set, and we give a characterization of it. We show that in L there is a Π¹₁ sparse set which is not smooth. These results are analogs of the results known for the ideal of countable sets, but it remains open to determine whether large cardinal axioms imply that Π¹₁ sparse sets are smooth. Some more specific results are proved for the case of a countable Borel equivalence relation. We also study I(E), the σ-ideal of closed E-smooth sets. Among other things we prove that E is smooth iff I(E) is Borel.
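
For context, a minimal LaTeX statement of the classical template that the smooth-set analog transposes (with smoothness playing the role of countability); this is the standard result, not the thesis's formulation:

% Classical perfect set theorem for analytic sets (Suslin), assuming an
% amsthm-style theorem environment:
\begin{theorem}[Perfect set theorem for $\Sigma^1_1$ sets]
  Let $X$ be a Polish space and let $A \subseteq X$ be $\Sigma^1_1$
  (analytic). Then either $A$ is countable, or $A$ contains a nonempty
  perfect set (and hence has cardinality $2^{\aleph_0}$).
\end{theorem}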

In chapter 2 we study σ-ideals of compact sets. We are interested in the relationship between some descriptive set theoretic properties like thinness, strong calibration, and the covering property. We also study products of σ-ideals from the same point of view. In chapter 3 we show that if a σ-ideal I has the covering property (which is an abstract version of the perfect set theorem for Σ¹₁ sets), then there is a largest Π¹₁ set in I_int (i.e., a largest Π¹₁ set every closed subset of which is in I). For σ-ideals on 2^ω we present a characterization of this set in a similar way as for C₁, the largest thin Π¹₁ set. As a corollary we get that if there are only countably many reals in L, then the covering property holds for Σ¹₂ sets.

Relevance:

90.00%

Publisher:

Abstract:

This review summarizes the findings of five years' research (June 1970 to June 1975) on the meres of the Shropshire-Cheshire Plain. A mere is a small, shallow lake supplied principally by groundwater, whose chemical composition is influenced by the glacial drift through which it percolates. The study of the seasonal periodicity of the phytoplankton in the meres involved work mainly in Crose Mere. Here diatoms were typically dominant in February and March, green algae in April and May, blue-green algae in early summer, and dinoflagellates in late summer. This pattern is broadly similar from year to year and has been suggested to be representative of a 'regional type'; it is also similar to that described for many of the world's mildly eutrophic temperate lakes. The vertical distribution of phytoplankton is influenced by their buoyancy (or lack of it) or by their ability to swim. A stylized depth-time distribution of the four major phytoplankton components in Crose Mere is given diagrammatically.

Relevance:

90.00%

Publisher:

Abstract:

The microwave response of the superconducting state in equilibrium and non-equilibrium configurations was examined experimentally and analytically. Thin film superconductors were mostly studied in order to explore spatial effects. The response parameter measured was the surface impedance.

For small microwave intensity the surface impedance at 10 GHz was measured for a variety of samples (mostly Sn) over a wide range of sample thickness and temperature. A detailed analysis based on the BCS theory was developed for calculating the surface impedance for general thickness and other experimental parameters. Experiment and theory agreed with each other to within the experimental accuracy. Thus it was established that the samples, thin films as well as bulk, were well characterised at low microwave powers (near equilibrium).

Thin films were perturbed by a small dc supercurrent, and the effect on the superconducting order parameter and the quasiparticle response was determined by measuring changes in the surface resistance (still at low microwave intensity and independent of it) due to the induced current. The use of fully superconducting resonators enabled the measurement of very small changes in the surface resistance (< 10⁻⁹ Ω/sq.). These experiments yield information regarding the dynamics of the order parameter and quasiparticle systems. For all the films studied, the results near Tc could be described by the thermodynamic depression of the order parameter due to the static current, leading to a quadratic increase of the surface resistance with current.

For the thinnest films the low temperature results were surprising in that the surface resistance decreased with increasing current. An explanation is proposed according to which this decrease occurs due to an additional high frequency quasiparticle current caused by the combined presence of both static and high frequency fields. For frequencies larger than the inverse of the quasiparticle relaxation time this additional current is out of phase (by π) with the microwave electric field and is observed as a decrease of surface resistance. Calculations agree quantitatively with experimental results. This is the first observation and explanation of this non-equilibrium quasiparticle effect.

For thicker films of Sn, the low temperature surface resistance was found to increase with applied static current. It is proposed that due to the spatial non-uniformity of the induced current distribution across the thicker films, the above purely temporal analysis of the local quasiparticle response needs to be generalised to include space and time non-equilibrium effects.

The nonlinear interaction of microwaves and superconducting films was also examined in a third set of experiments. The surface impedance of thin films was measured as a function of the incident microwave magnetic field. The experiments exploit the ability to measure the absorbed microwave power and the applied microwave magnetic field absolutely. It was found that the applied surface microwave field could not be raised above a certain threshold level, at which the absorption increased abruptly. This critical field level represents a dynamic critical field and was found to be associated with penetration of the applied field into the film at values well below the thermodynamic critical field for the configuration of a field applied to one side of the film. The penetration occurs despite the thermal stability of the film, which was unequivocally demonstrated by experiment. A new mechanism for such penetration via the formation of a vortex-antivortex pair is proposed. The experimental results for the thinnest films agreed with the calculated values of this pair-generation field. The observations of increased transmission at the critical field level and suppression of the process by a metallic ground plane further support the proposed model.

Relevance:

90.00%

Publisher:

Abstract:

Accurate analytical series expressions for the far-field diffraction of a Gaussian beam normally incident on a circular aperture with a central obscuration are derived with the help of the integration-by-parts method. With these expressions, the far-field intensity distribution pattern can be obtained and the divergence angle deduced. Using the first five terms of the series, the accuracy is sufficient for most laser applications. Compared with the conventional numerical integration method, the series representation is very convenient for understanding the physical meaning. (C) 2007 Elsevier GmbH. All rights reserved.
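
A minimal numerical cross-check of this kind of result, using direct quadrature of the standard circularly symmetric Fraunhofer integral for a Gaussian beam over an annular aperture (illustrative parameters; the paper's series expressions are not reproduced):

# Far-field amplitude for a circularly symmetric aperture:
#     E(theta) ~ integral_a^b exp(-r^2/w^2) * J0(k r sin(theta)) * r dr
import numpy as np
from scipy.integrate import quad
from scipy.special import j0

lam = 1.064e-6            # wavelength [m] (illustrative)
k = 2.0 * np.pi / lam
w = 1.0e-3                # Gaussian beam radius at the aperture [m]
a, b = 0.2e-3, 1.5e-3     # central obscuration and aperture radii [m]

def far_field_intensity(theta):
    integrand = lambda r: np.exp(-(r / w) ** 2) * j0(k * r * np.sin(theta)) * r
    val, _ = quad(integrand, a, b, limit=200)
    return val ** 2

I0 = far_field_intensity(0.0)
for th in np.linspace(0.0, 2e-3, 5):      # small angles [rad]
    print(f"theta={th:.1e} rad  I/I0={far_field_intensity(th)/I0:.4f}")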

Relevance:

90.00%

Publisher:

Abstract:

In the recent history of psychology and cognitive neuroscience, the notion of habit has been reduced to a stimulus-triggered response probability correlation. In this paper we use a computational model to present an alternative theoretical view (with some philosophical implications), in which habits are seen as self-maintaining patterns of behavior that share properties with self-maintaining biological processes and that inhabit a complex ecological context, including the presence and influence of other habits. Far from mechanical automatisms, this organismic and self-organizing concept of habit can overcome the dominant atomistic and statistical conceptions, and the high-temporal-resolution effects of situatedness, embodiment, and sensorimotor loops emerge as playing a more central, subtle, and complex role in the organization of behavior. The model is based on a novel "iterant deformable sensorimotor medium (IDSM)," designed such that trajectories taken through sensorimotor space increase the likelihood that similar trajectories will be taken in the future. We couple the IDSM to the sensors and motors of a simulated robot and show that, under certain conditions, the IDSM forms self-maintaining patterns of activity that operate across the IDSM, the robot's body, and the environment. We present various environments and the resulting habits that form in them. The model acts as an abstraction of habits at a much-needed sensorimotor "meso-scale" between microscopic neuron-based models and macroscopic descriptions of behavior. Finally, we discuss how this model and extensions of it can help us understand aspects of behavioral self-organization, historicity, and autonomy that remain out of the scope of contemporary representationalist frameworks.
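
As a toy illustration of the reinforcement principle attributed to the IDSM (far simpler than the continuous deformable medium itself), the following Python sketch lets visited transitions in a discretized one-dimensional sensorimotor space become more likely to recur:

# A random walker whose transition preferences are strengthened wherever
# they are exercised; over time the walk settles into a self-reinforcing
# path, a crude "habit". Purely illustrative, not the IDSM.
import numpy as np

rng = np.random.default_rng(0)
N = 20                               # discretized 1-D sensorimotor space
weight = np.full((N, 2), 0.5)        # preference weights: [left, right]

state = N // 2
visits = np.zeros(N)
for _ in range(2000):
    p = weight[state] / weight[state].sum()
    move = rng.choice([-1, 1], p=p)
    weight[state, (move + 1) // 2] += 0.05   # reinforce the transition taken
    state = int(np.clip(state + move, 0, N - 1))
    visits[state] += 1

# Fraction of time spent at each location: mass concentrates on the
# history-dependent path the walker carved out for itself.
print(np.round(visits / visits.sum(), 2))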