989 results for variable smoothing constant
Abstract:
We show that if a language is recognized within certain error bounds by constant-depth quantum circuits over a finite family of gates, then it is computable in (classical) polynomial time. In particular, our results imply EQNC^0 ⊆ P, where EQNC^0 is the constant-depth analog of the class EQP. On the other hand, we adapt and extend ideas of Terhal and DiVincenzo [?] to show that, for any family
Abstract:
We present a distributed indexing scheme for peer-to-peer networks. Past work on distributed indexing traded fast search times off against non-constant-degree topologies or network-unfriendly behavior such as flooding. In contrast, the scheme we present optimizes all three of these performance measures: we provide logarithmic-round searches while maintaining connections to a fixed number of peers and avoiding network flooding. In comparison to the well-known Chord scheme, we provide competitive constant factors. Finally, we observe that arbitrary linear speedups are possible and discuss both a general brute-force approach and specific economical optimizations.
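The abstract does not describe its construction, but the combination it claims (constant degree plus logarithmic-round routing) is worth illustrating. The sketch below uses a binary de Bruijn graph, a classical way to get both properties at once; this is an assumed stand-in for illustration, not the paper's actual scheme.

```python
# Hypothetical sketch: binary de Bruijn routing reaches any node in at
# most K hops while each node keeps only 2 out-neighbours. This shows
# how constant degree and logarithmic searches can coexist; it is NOT
# the paper's construction.

K = 8  # identifier length in bits; the network has n = 2**K nodes

def neighbours(node: int) -> list[int]:
    """Each node (a K-bit label) links to the two nodes obtained by
    shifting its label left one bit and appending a 0 or a 1."""
    shifted = (node << 1) & ((1 << K) - 1)
    return [shifted, shifted | 1]

def route(src: int, dst: int) -> list[int]:
    """Reach dst by shifting in dst's bits one at a time: K hops,
    i.e. O(log n) rounds."""
    path, cur = [src], src
    for i in range(K - 1, -1, -1):
        bit = (dst >> i) & 1
        cur = ((cur << 1) & ((1 << K) - 1)) | bit
        path.append(cur)
    return path
```

After K shifts every bit of the current label comes from the target, so the walk always terminates at `dst` regardless of the start node.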
Abstract:
In this paper we present Statistical Rate Monotonic Scheduling (SRMS), a generalization of the classical RMS results of Liu and Layland that allows scheduling periodic tasks with highly variable execution times and statistical QoS requirements. Like RMS, SRMS has two components: a feasibility test and a scheduling algorithm. The feasibility test for SRMS ensures that, under the SRMS scheduling algorithm, a given periodic task set can share a given resource (e.g. a processor, communication medium, or switching device) without violating any of the periodic tasks' QoS constraints. The SRMS scheduling algorithm incorporates a number of unique features. First, it allows for fixed-priority scheduling that keeps the tasks' value (or importance) independent of their periods. Second, it allows for job admission control: jobs that are not guaranteed to finish by their deadlines are rejected as soon as they are released, enabling the system to take necessary compensating actions. Admission control also preserves resources, since no time is spent on jobs that would miss their deadlines anyway. Third, SRMS integrates reservation-based and best-effort resource scheduling seamlessly. Reservation-based scheduling ensures the delivery of the minimal requested QoS; best-effort scheduling ensures that unused, reserved bandwidth is not wasted, but rather used to improve QoS further. Fourth, SRMS allows a system to deal gracefully with overload conditions by ensuring a fair deterioration in QoS across all tasks, as opposed to penalizing tasks with longer periods, for example. Finally, SRMS has the added advantage that its schedulability test is simple and its scheduling algorithm has a constant overhead, in the sense that the complexity of the scheduler does not depend on the number of tasks in the system.
We have evaluated SRMS against a number of alternative scheduling algorithms suggested in the literature (e.g. RMS and slack stealing), as well as refinements thereof, which we describe in this paper. Consistently throughout our experiments, SRMS provided the best performance. In addition, to evaluate the optimality of SRMS, we have compared it to an inefficient, yet optimal scheduler for task sets with harmonic periods.
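The release-time admission control the abstract describes can be sketched with a simple per-period budget model. The budget semantics and field names below are assumptions for illustration, not SRMS's exact formulation.

```python
# Hedged sketch of release-time job admission control in the spirit
# of SRMS: a job is admitted only if its requested execution time
# still fits within the budget reserved for its task in the current
# period. The budget model is an assumption, not the paper's exact test.
from dataclasses import dataclass

@dataclass
class Task:
    period: float        # task period (also its relative deadline)
    budget: float        # execution-time allowance reserved per period
    used: float = 0.0    # budget consumed so far in this period

    def admit(self, exec_time: float) -> bool:
        """Admit the newly released job only if it is guaranteed to
        finish: enough reserved budget must remain in this period."""
        if self.used + exec_time <= self.budget:
            self.used += exec_time
            return True
        return False     # rejected at release; compensating action possible

    def new_period(self) -> None:
        self.used = 0.0  # the reservation is replenished each period
```

Rejecting at release time, rather than letting a doomed job run, is what lets the reserved bandwidth be handed to best-effort work instead of being wasted.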
Abstract:
Quality of Service (QoS) guarantees are required by an increasing number of applications to ensure a minimal level of fidelity in the delivery of application data units through the network. Application-level QoS does not necessarily follow from transport-level QoS guarantees regarding the delivery of the individual cells (e.g. ATM cells) which comprise the application's data units. The distinction between application-level and transport-level QoS guarantees is due primarily to the fragmentation that occurs when transmitting large application data units (e.g. IP packets, or video frames) using much smaller network cells: the partial delivery of a data unit is useless, and the bandwidth spent to partially transmit it is wasted. The data units transmitted by an application may vary in size while being constant in rate, which results in a variable bit rate (VBR) data flow requiring QoS guarantees. Statistical multiplexing is inadequate, because no guarantees can be made and no firewall property exists between different data flows. In this paper, we present a novel resource management paradigm for the maintenance of application-level QoS for VBR flows. Our paradigm is based on Statistical Rate Monotonic Scheduling (SRMS), in which (1) each application generates its variable-size data units at a fixed rate, (2) the partial delivery of data units is of no value to the application, and (3) the QoS guarantee extended to the application is the probability that an arbitrary data unit will be successfully transmitted through the network to/from the application.
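Why per-cell guarantees fail to compose into data-unit guarantees is simple arithmetic, sketched below with generic numbers (not the paper's figures): if a unit spans n cells and partial delivery is useless, unit-level success is the product of the per-cell probabilities.

```python
# Hedged illustration of the fragmentation problem: per-cell delivery
# guarantees do not translate into data-unit guarantees, because a
# unit is only useful if every one of its cells arrives.
def unit_delivery_prob(p_cell: float, n_cells: int) -> float:
    """Probability that a data unit fragmented into n_cells cells is
    delivered completely, assuming independent per-cell losses."""
    return p_cell ** n_cells

# Example: a 9000-byte video frame over 48-byte ATM payloads spans
# ~188 cells, so even 99.9% per-cell delivery yields only ~83%
# frame-level delivery.
```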
Abstract:
This paper proposes a method for detecting shapes of variable structure in images with clutter. The term "variable structure" means that some shape parts can be repeated an arbitrary number of times, some parts can be optional, and some parts can have several alternative appearances. The particular variation of the shape structure that occurs in a given image is not known a priori. Existing computer vision methods, including deformable model methods, were not designed to detect shapes of variable structure; they may only be used to detect shapes that can be decomposed into a fixed, a priori known, number of parts. The proposed method can handle both variations in shape structure and variations in the appearance of individual shape parts. A new class of shape models is introduced, called Hidden State Shape Models, that can naturally represent shapes of variable structure. A detection algorithm is described that finds instances of such shapes in images with large amounts of clutter by finding globally optimal correspondences between image features and shape models. Experiments with real images demonstrate that our method can localize plant branches that consist of an a priori unknown number of leaves and can detect hands more accurately than a hand detector based on the chamfer distance.
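The "globally optimal correspondence" search over hidden states is reminiscent of Viterbi decoding, where self-transitions allow a shape part to repeat an arbitrary number of times. The toy dynamic program below illustrates that idea with invented costs; it is not the paper's detection algorithm.

```python
# Illustrative only: minimum-cost assignment of image features to
# model states via Viterbi-style dynamic programming. Self-transitions
# in `trans` would let a state (shape part) repeat arbitrarily often,
# which is the "variable structure" flavour of the abstract.
def viterbi(obs_costs, trans):
    """obs_costs[t][s]: cost of matching feature t to state s.
    trans[sp][s]: cost of moving from state sp to state s.
    Returns the minimum-cost state sequence over all features."""
    n_states = len(obs_costs[0])
    best = list(obs_costs[0])          # best cost ending in each state
    back = []                          # backpointers per step
    for t in range(1, len(obs_costs)):
        new, ptr = [], []
        for s in range(n_states):
            c, p = min((best[sp] + trans[sp][s], sp)
                       for sp in range(n_states))
            new.append(c + obs_costs[t][s])
            ptr.append(p)
        best, back = new, back + [ptr]
    # backtrack from the cheapest final state
    s = min(range(n_states), key=lambda i: best[i])
    path = [s]
    for ptr in reversed(back):
        s = ptr[s]
        path.append(s)
    return list(reversed(path))
```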
Abstract:
Speech can be understood at widely varying production rates. A working memory is described for short-term storage of temporal lists of input items. The working memory is a cooperative-competitive neural network that automatically adjusts its integration rate, or gain, to generate a short-term memory code for a list that is independent of item presentation rate. Such an invariant working memory model is used to simulate data of Repp (1980) concerning the changes of phonetic category boundaries as a function of their presentation rate. Thus the variability of categorical boundaries can be traced to the temporal invariance of the working memory code.
Abstract:
Neural network models of working memory, called Sustained Temporal Order REcurrent (STORE) models, are described. They encode the invariant temporal order of sequential events in short term memory (STM) in a way that mimics cognitive data about working memory, including primacy, recency, and bowed order and error gradients. As new items are presented, the pattern of previously stored items is invariant in the sense that, relative activations remain constant through time. This invariant temporal order code enables all possible groupings of sequential events to be stably learned and remembered in real time, even as new events perturb the system. Such a competence is needed to design self-organizing temporal recognition and planning systems in which any subsequence of events may need to be categorized in order to to control and predict future behavior or external events. STORE models show how arbitrary event sequences may be invariantly stored, including repeated events. A preprocessor interacts with the working memory to represent event repeats in spatially separate locations. It is shown why at least two processing levels are needed to invariantly store events presented with variable durations and interstimulus intervals. It is also shown how network parameters control the type and shape of primacy, recency, or bowed temporal order gradients that will be stored.
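The invariance property (earlier items keep their relative activations as new items arrive) can be sketched with a toy update rule. The parameterisation below is an assumption for illustration, not the STORE equations, and items are assumed distinct (the abstract's preprocessor handles repeats).

```python
# Toy sketch of an invariant temporal-order code: each new item enters
# at a fixed level while every stored activation is rescaled by one
# common factor w, so ratios between previously stored items never
# change. NOT the STORE equations; w is an assumed parameter.
def store_sequence(items, w=0.8, x_new=1.0):
    """Return one activation per (distinct) item. w < 1 yields a
    recency gradient (latest item strongest); w > 1 yields a primacy
    gradient (earliest item strongest)."""
    stm = {}
    for item in items:
        for k in stm:
            stm[k] *= w          # uniform rescaling preserves ratios
        stm[item] = x_new        # newest item enters at a fixed level
    return stm
```

With `w = 0.5`, `['a', 'b', 'c']` ends as activations 0.25, 0.5, 1.0: the ratio between 'a' and 'b' was fixed the moment 'b' arrived and never changed afterwards, which is the invariance the abstract describes.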
Abstract:
The objective of this paper is to investigate the effect of the pad size ratio between the chip and board end of a solder joint, in combination with the available solder volume, on the shape of that solder joint. The shape of the solder joint is correlated with its reliability and is thus of importance. For low-density chip bond pad applications, Flip Chip (FC) manufacturing costs can be kept down by using larger board pads suitable for solder application. Using the “Surface Evolver” software package, the solder joint shapes associated with different size/shape solder preforms and chip/board pad ratios are predicted. In this case a so-called Flip-Chip Over Hole (FCOH) assembly format has been used. Assembly trials involved the deposition of lead-free 99.3Sn0.7Cu solder on the board side, followed by reflow, an underfill process and back die encapsulation. Pad offsets that occurred during the assembly work were taken into account in the Surface Evolver solder joint shape prediction, which then accurately matched the real assembly. Overall, good correlation was found between the simulated and the actual fabricated solder joint shapes. Solder preforms were found to exhibit better control over the solder volume. Reflow simulation of commercially available solder preform volumes suggests that, for a fixed stand-off height and chip/board pad ratio, the solder volume and the surface tension determine the shape of the joint.
Abstract:
The last 30 years have seen Fuzzy Logic (FL) emerge as a method either complementing or challenging stochastic methods as the traditional way of modelling uncertainty. But the circumstances under which FL or stochastic methods should be used are shrouded in disagreement, because the areas of application of statistical and FL methods overlap, with differing opinions as to when each method should be used. What is lacking are practically relevant case studies comparing the two methods. This work compares stochastic and FL methods for the assessment of spare capacity using the example of pharmaceutical high purity water (HPW) utility systems. The goal of this study was to find the most appropriate method for modelling uncertainty in industrial-scale HPW systems. The results provide evidence suggesting that stochastic methods are superior to FL methods for simulating uncertainty in chemical plant utilities, including HPW systems, in typical cases where extreme events, for example peaks in demand, or day-to-day variation rather than average values are of interest. The average production output or other statistical measures may, for instance, be of interest in the assessment of workshops. Furthermore, the results indicate that the stochastic model should be used only if found necessary by a deterministic simulation. Consequently, this thesis concludes that either deterministic or stochastic methods should be used to simulate uncertainty in chemical plant utility systems, and by extension some process systems, because extreme events or the modelling of day-to-day variation are important in capacity extension projects. Other reasons supporting the preference for stochastic HPW models over FL HPW models include: 1. The computer code for stochastic models is typically less complex than that of FL models, thus reducing code maintenance and validation issues. 2. In many respects FL models are similar to deterministic models.
Thus the need for an FL model over a deterministic model is questionable in the case of industrial-scale HPW systems as presented here (as well as other similar systems), since the latter requires simpler models. 3. An FL model may be difficult to "sell" to an end-user, as its results represent "approximate reasoning", a definition of which is, however, lacking. 4. Stochastic models may be applied, with some relatively minor modifications, to other systems, whereas FL models may not. For instance, the stochastic HPW model could be used to model municipal drinking water systems, whereas the FL HPW model should not, or could not, be used on such systems. This is because the FL and stochastic model philosophies of an HPW system are fundamentally different. The stochastic model sees schedule and volume uncertainties as random phenomena described by statistical distributions based on either estimated or historical data. The FL model, on the other hand, simulates schedule uncertainties based on estimated operator behaviour, e.g. the tiredness of the operators and their working schedule. But in a municipal drinking water distribution system the notion of "operator" breaks down. 5. Stochastic methods can account for uncertainties that are difficult to model with FL. The FL HPW system model does not account for dispensed volume uncertainty, as there appears to be no reasonable method to account for it with FL, whereas the stochastic model includes volume uncertainty.
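The thesis's argument that stochastic methods suit extreme events can be made concrete with a toy Monte Carlo sketch. The numbers below are invented, not the thesis's HPW model: a deterministic calculation using mean demand hides the coincident peaks a stochastic simulation exposes.

```python
# Toy sketch (invented parameters, not the thesis's model): users draw
# water independently at random, so total demand occasionally spikes
# far above its deterministic mean.
import random

def simulate_peak_demand(n_users=10, mean_draw=5.0, p_active=0.3,
                         trials=10000, seed=42):
    """Each user independently draws mean_draw units with probability
    p_active; returns the 99th-percentile total demand over trials."""
    random.seed(seed)
    totals = []
    for _ in range(trials):
        total = sum(mean_draw for _ in range(n_users)
                    if random.random() < p_active)
        totals.append(total)
    totals.sort()
    return totals[int(0.99 * trials)]

# Deterministic mean demand is n_users * p_active * mean_draw = 15
# units, but the simulated 99th-percentile demand is about double
# that, which is exactly what capacity planning cares about.
```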
Abstract:
The aim of this study was to develop a methodology, based on satellite remote sensing, to estimate the vegetation Start of Season (SOS) across the whole island of Ireland on an annual basis. This work contributes to a growing body of research known as Land Surface Phenology (LSP) monitoring. The SOS was estimated for each year from a 7-year time series of 10-day composited, 1.2 km reduced-resolution MERIS Global Vegetation Index (MGVI) data from 2003 to 2009, using the time series analysis software TIMESAT. The selection of a 10-day compositing period was guided by in-situ observations of leaf unfolding and cloud cover at representative point locations on the island. The MGVI time series was smoothed and the SOS metric extracted at the point corresponding to 20% of the seasonal MGVI amplitude. The SOS metric was extracted on a per-pixel basis and gridded for national-scale coverage. There were consistent spatial patterns in the SOS grids which were replicated on an annual basis and were qualitatively linked to variation in landcover. Analysis revealed that three statistically separable groups of CORINE Land Cover (CLC) classes could be derived from differences in the SOS, namely agricultural and forest land cover types, peat bogs, and natural and semi-natural vegetation types. These groups demonstrated that managed vegetation, e.g. pastures, has a significantly earlier SOS than unmanaged vegetation, e.g. natural grasslands. There was also interannual spatio-temporal variability in the SOS. Such variability was highlighted in a series of anomaly grids showing variation from the 7-year mean SOS. An initial climate analysis indicated that an anomalously cold winter and spring in 2005/2006, linked to a negative North Atlantic Oscillation index value, delayed the 2006 SOS countrywide, while in other years the SOS anomalies showed more complex variation.
A correlation study using air temperature as a climate variable revealed the spatial complexity of the air temperature-SOS relationship across the Republic of Ireland, as the timing of maximum correlation varied from November to April depending on location. The SOS was found to occur earlier due to warmer winters in the southeast, while it was later with warmer winters in the northwest. The inverse pattern emerged in the spatial patterns of the spring correlates. This contrasting pattern would appear to be linked to vegetation management, as arable cropping is typically practiced in the southeast while there is mixed agriculture and mostly pastures to the west. Therefore, land use as well as air temperature appears to be an important determinant of national-scale patterns in the SOS. The TIMESAT tool formed a crucial component of the estimation of SOS across the country in all seven years as it minimised the negative impact of noise and data dropouts in the MGVI time series by applying a smoothing algorithm. The extracted SOS metric was sensitive to temporal and spatial variation in land surface vegetation seasonality, while the spatial patterns in the gridded SOS estimates aligned with those in landcover type. The methodology can be extended for a longer time series of FAPAR as MERIS will be replaced by the ESA Sentinel mission in 2013, while the availability of full resolution (300m) MERIS FAPAR and equivalent sensor products holds the possibility of monitoring finer scale seasonality variation. This study has shown the utility of the SOS metric as an indicator of spatiotemporal variability in vegetation phenology, as well as a correlate of other environmental variables such as air temperature. However, the satellite-based method is not seen as a replacement of ground-based observations, but rather as a complementary approach to studying vegetation phenology at the national scale.
In future, the method can be extended to extract other metrics of the seasonal cycle in order to gain a more comprehensive view of seasonal vegetation development.
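The core SOS extraction step (smooth the vegetation-index series, then find where the rising curve crosses 20% of the seasonal amplitude) can be sketched as below. TIMESAT fits smooth model functions; the moving average here is an assumed stand-in for illustration.

```python
# Hedged sketch of SOS extraction from one season of composited
# vegetation-index values (e.g. 10-day MGVI composites). The moving
# average substitutes for TIMESAT's model fitting.
def start_of_season(vi, frac=0.2, win=3):
    """Return the index of the first composite at which the smoothed
    series rises past base + frac * (peak - base), or None."""
    half = win // 2
    smooth = [
        sum(vi[max(0, i - half):i + half + 1])
        / len(vi[max(0, i - half):i + half + 1])
        for i in range(len(vi))
    ]
    base, peak = min(smooth), max(smooth)
    threshold = base + frac * (peak - base)
    for i, v in enumerate(smooth):
        if v >= threshold:
            return i          # first crossing on the rising limb
    return None
```

With 10-day composites, multiplying the returned index by 10 gives an approximate day-of-year for the SOS, which is how a per-pixel metric like this can be gridded for national coverage.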
Abstract:
Nanostructured materials are central to the evolution of future electronics and information technologies. Ferroelectrics have already been established as a dominant branch of the electronics sector because of their diverse application range, such as ferroelectric memories, ferroelectric tunnel junctions, etc. The on-going dimensional downscaling of materials to allow packing of increased numbers of components onto integrated circuits provides the momentum for the evolution of nanostructured ferroelectric materials and devices. Nanoscaling of ferroelectric materials can result in a modification of their functionality, such as phase transition temperature or Curie temperature (TC), domain dynamics, dielectric constant, coercive field, spontaneous polarisation and piezoelectric response. Furthermore, nanoscaling can be used to form high-density arrays of monodomain ferroelectric nanostructures, which is desirable for the miniaturisation of memory devices. This thesis details the use of various types of nanostructuring approaches to fabricate arrays of ferroelectric nanostructures, particularly non-oxide based systems. The introductory chapter reviews some exemplary research breakthroughs in the synthesis, characterisation and applications of nanoscale ferroelectric materials over the last decade, with priority given to novel synthetic strategies. Chapter 2 provides an overview of the experimental methods and characterisation tools used to produce and probe the properties of nanostructured antimony sulphide (Sb2S3), antimony sulphoiodide (SbSI) and lead zirconate titanate (PZT). In particular, Chapter 2 details the general principles of piezoresponse force microscopy (PFM). Chapter 3 highlights the fabrication of arrays of Sb2S3 nanowires with variable diameters using a newly developed solventless template-based approach. A detailed account of domain imaging and polarisation switching of these nanowire arrays is also provided.
Chapter 4 details the preparation of vertically aligned arrays of SbSI nanorods and nanowires using a surface-roughness-assisted vapour-phase deposition method. The qualitative and quantitative nanoscale ferroelectric properties of these nanostructures are also discussed. Chapter 5 highlights the fabrication of highly ordered arrays of PZT nanodots using block copolymer self-assembled templates and their ferroelectric characterisation using PFM. Chapter 6 summarises the conclusions drawn from the results reported in Chapters 3, 4 and 5 and outlines future work.
Abstract:
The thesis initially gives an overview of the wave industry and the current state of some of the leading technologies, as well as the energy storage systems that are inherently part of the power take-off mechanism. The benefits of electrical energy storage systems for wave energy converters are then outlined, as well as the key parameters required of them. The options for storage systems are investigated, and the reasons for examining supercapacitors and lithium-ion batteries in more detail are shown. The thesis then focusses its analysis on a particular type of offshore wave energy converter: the backward bent duct buoy employing a Wells turbine. Variable speed strategies from the research literature which make use of the energy stored in the turbine inertia are examined for this system, and based on this analysis an appropriate scheme is selected. A supercapacitor power smoothing approach is presented in conjunction with the variable speed strategy. As long component lifetime is a requirement for offshore wave energy converters, a computer-controlled test rig has been built to validate supercapacitor lifetimes against the manufacturer's specifications. The test rig is also utilised to determine the effect of temperature on supercapacitors and to determine application lifetime. Cycle testing is carried out on individual supercapacitors at room temperature, and also at rated temperature, utilising a thermal chamber and equipment programmed through the general purpose interface bus (GPIB) by Matlab. Application testing is carried out using time-compressed scaled-power profiles from the model to allow a comparison of lifetime degradation. Further applications of supercapacitors in offshore wave energy converters are then explored. These include start-up of the non-self-starting Wells turbine, and low-voltage ride-through, examined to the limits specified in the Irish grid code for wind turbines.
These applications are investigated with a more complete model of the system that includes a detailed back-to-back converter coupling a permanent magnet synchronous generator to the grid. Supercapacitors have been utilised in combination with battery systems in many applications to aid with peak power requirements, and have been shown to improve the performance of these energy storage systems. The design, implementation, and construction of the coupling of a 5 kWh lithium-ion battery to a microgrid are described. The high-voltage battery had a continuous power rating of 10 kW and was designed for the future EV market with a controller area network interface. This build gives a general insight into some of the engineering, planning, safety, and cost requirements of implementing a high-power energy storage system near or on an offshore device for interface to a microgrid or grid.
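The power-smoothing idea the thesis pairs with the variable-speed strategy can be sketched generically: the grid is fed a low-pass-filtered version of the fluctuating turbine power, and the supercapacitor buffers the difference. The filter and parameter values below are assumptions for illustration, not the thesis's design.

```python
# Hedged sketch of supercapacitor power smoothing: exponential
# smoothing stands in for whatever filter the converter control uses.
def smooth_power(p_turbine, alpha=0.1, dt=1.0):
    """p_turbine: sampled turbine power (W) at interval dt (s).
    Returns the smoothed grid-export profile and the net energy (J)
    exchanged by the supercapacitor (positive = absorbing surplus)."""
    p_grid, energy, profile = p_turbine[0], 0.0, []
    for p in p_turbine:
        p_grid += alpha * (p - p_grid)   # low-pass filter
        energy += (p - p_grid) * dt      # surplus charged into the capacitor
        profile.append(p_grid)
    return profile, energy
```

On a strongly pulsating input such as `[0, 100, 0, 100, ...]` the exported profile stays well below the peaks, which is the point: the capacitor, not the grid, sees the wave-by-wave fluctuation.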
Abstract:
Solar energy is a clean and abundant energy source that can help reduce reliance on fossil fuels, about whose contribution to climate change and long-term availability questions still persist. Monolithic triple-junction solar cells are currently the state-of-the-art photovoltaic devices, with champion cell efficiencies exceeding 40%, but their ultimate efficiency is restricted by the current-matching constraint of series-connected cells. The objective of this thesis was to investigate the use of solar cells with lattice constants equal to that of InP in order to reduce the constraint of current matching in multi-junction solar cells. This was addressed by two approaches. Firstly, the formation of mechanically stacked solar cells (MSSC) was investigated through the addition of separate connections to the individual cells that make up a multi-junction device. An electrical and optical modelling approach identified separately connected InGaAs bottom cells stacked under dual-junction GaAs-based top cells as a route to high efficiency. An InGaAs solar cell was fabricated on an InP substrate with a measured 1-Sun conversion efficiency of 9.3%. A comparative study of adhesives found benzocyclobutene to be the most suitable for bonding component cells in a mechanically stacked configuration, owing to its higher thermal conductivity and refractive index when compared to other candidate adhesives. A flip-chip process was developed to bond single-junction GaAs and InGaAs cells, with a measured 4-terminal MSSC efficiency of 25.2% under 1-Sun conditions. Additionally, a novel InAlAs solar cell was identified, which can provide an alternative to the well-established GaAs solar cell. As wide-bandgap InAlAs solar cells have not been extensively investigated for use in photovoltaics, single-junction cells were fabricated and their properties relevant to PV operation analysed.
Minority carrier diffusion lengths in the micrometre range were extracted, confirming InAlAs as a suitable material for use in III-V solar cells, and a 1-Sun conversion efficiency of 6.6% measured for cells with 800 nm thick absorber layers. Given the cost and small diameter of commercially available InP wafers, InGaAs and InAlAs solar cells were fabricated on alternative substrates, namely GaAs. As a first demonstration the lattice constant of a GaAs substrate was graded to InP using an InxGa1-xAs metamorphic buffer layer onto which cells were grown. This was the first demonstration of an InAlAs solar cell on an alternative substrate and an initial step towards fabricating these cells on Si. The results presented offer a route to developing multi-junction solar cell devices based on the InP lattice parameter, thus extending the range of available bandgaps for high efficiency cells.
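Why separate connections relax the current-matching constraint is simple arithmetic, sketched here with invented operating points (not the thesis's measured cells): a monolithic series stack must carry one current, the smallest subcell current, whereas a 4-terminal stack harvests each cell's own maximum power.

```python
# Toy arithmetic illustrating the current-matching constraint of
# series-connected multi-junction cells versus a mechanically stacked
# (separately connected) configuration. Operating points are invented.
def series_power(cells):
    """cells: list of (current_A, voltage_V) maximum-power points.
    A monolithic series stack carries one current: the minimum."""
    i_min = min(i for i, _ in cells)
    return i_min * sum(v for _, v in cells)

def stacked_power(cells):
    """Separately connected cells each operate at their own MPP."""
    return sum(i * v for i, v in cells)

cells = [(0.030, 1.0), (0.014, 0.55)]  # mismatched top and bottom cells
# stacked_power exceeds series_power whenever the currents mismatch,
# which is the motivation for the 4-terminal MSSC approach.
```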
Abstract:
Consensus HIV-1 genes can decrease the genetic distances between candidate immunogens and field virus strains. To ensure the functionality and optimal presentation of immunologic epitopes, we generated two group-M consensus env genes that contain variable regions either from a wild-type B/C recombinant virus isolate (CON6) or minimal consensus elements (CON-S) in the V1, V2, V4, and V5 regions. C57BL/6 and BALB/c mice were primed twice with CON6, CON-S, and subtype control (92UG37_A and HXB2/Bal_B) DNA and boosted with recombinant vaccinia virus (rVV). Mean antibody titers against 92UG37_A, 89.6_B, 96ZM651_C, CON6, and CON-S Env protein were determined. Both CON6 and CON-S induced higher mean antibody titers against several of the proteins, as compared with the subtype controls. However, no significant differences were found in mean antibody titers between animals immunized with CON6 and those immunized with CON-S. Cellular immune responses were measured by using five complete Env overlapping peptide sets: subtype A (92UG37_A), subtype B (MN_B, 89.6_B and SF162_B), and subtype C (Chn19_C). The intensity of the induced cellular responses was measured by using pooled Env peptides; T-cell epitopes were identified by using matrix peptide pools and individual peptides. No significant differences in T-cell immune-response intensities were noted between CON6- and CON-S-immunized BALB/c and C57BL/6 mice. In BALB/c mice, ten and eight nonoverlapping T-cell epitopes were identified in CON6 and CON-S, respectively, whereas eight epitopes were identified in 92UG37_A and HXB2/BAL_B. In C57BL/6 mice, nine and six nonoverlapping T-cell epitopes were identified after immunization with CON6 and CON-S, respectively, whereas only four and three were identified in 92UG37_A and HXB2/BAL_B, respectively. When combined across both mouse strains, 18 epitopes were identified.
The group M artificial consensus env genes, CON6 and CON-S, were equally immunogenic in breadth and intensity for inducing humoral and cellular immune responses.
Abstract:
In a stochastic environment, long-term fitness can be influenced by variation, covariation, and serial correlation in vital rates (survival and fertility). Yet no study of an animal population has parsed the contributions of these three aspects of variability to long-term fitness. We do so using a unique database that includes complete life-history information for wild-living individuals of seven primate species that have been the subjects of long-term (22-45 years) behavioral studies. Overall, the estimated levels of vital rate variation had only minor effects on long-term fitness, and the effects of vital rate covariation and serial correlation were even weaker. To explore why, we compared estimated variances of adult survival in primates with values for other vertebrates in the literature and found that adult survival is significantly less variable in primates than it is in the other vertebrates. Finally, we tested the prediction that adult survival, because it more strongly influences fitness in a constant environment, will be less variable than newborn survival, and we found only mixed support for the prediction. Our results suggest that wild primates may be buffered against detrimental fitness effects of environmental stochasticity by their highly developed cognitive abilities, social networks, and broad, flexible diets.
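The quantity at stake (long-term fitness under vital-rate variability) is the stochastic growth rate, which can be sketched by simulation. The two-stage life cycle and vital-rate ranges below are invented for illustration, not the study's primate data; note the narrow range assumed for adult survival, mirroring the buffering the abstract reports.

```python
# Illustrative sketch: estimate the stochastic growth rate log(lambda_s)
# of a hypothetical two-stage (juvenile/adult) population by averaging
# the log of yearly population growth over many randomly drawn years.
# Vital-rate ranges are assumptions, not the study's data.
import math
import random

def stochastic_log_growth(years=5000, seed=1):
    random.seed(seed)
    juv, adult = 0.5, 0.5              # normalised stage distribution
    log_growth = 0.0
    for _ in range(years):
        # adult survival varies little (buffered); newborn survival
        # and fertility vary much more
        s_adult = random.uniform(0.88, 0.92)
        s_juv = random.uniform(0.3, 0.7)
        fert = random.uniform(0.2, 0.6)
        new_juv = fert * adult
        new_adult = s_juv * juv + s_adult * adult
        total = new_juv + new_adult
        log_growth += math.log(total)  # this year's growth of a unit vector
        juv, adult = new_juv / total, new_adult / total  # renormalise
    return log_growth / years          # estimate of log(lambda_s)
```

Shrinking the variance of a vital rate raises this long-run average (variance in growth depresses the geometric mean), which is why buffering the most fitness-sensitive rate, adult survival, pays off.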