907 results for Non-uniform flow
Abstract:
Routing techniques used in wavelength-routed optical networks (WRN) do not give an efficient solution for waveband-routed optical networks (WBN), because the objective of routing in WRN is to reduce the blocking probability, whereas in WBN it is to reduce the number of switching ports. Routing in WBN can be divided into two parts: finding the route, and grouping the wavelengths assigned to that route with existing wavelengths/wavebands. In this paper, we propose a heuristic for waveband routing that uses a new grouping strategy, called discontinuous waveband grouping, to group wavelengths into a waveband. The main objective of our algorithm is to decrease the total number of ports required and to reduce the blocking probability of the network. The performance of the heuristic is analyzed by simulation on a WBN with non-uniform wavebands.
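To make the grouping idea concrete, here is a minimal, hypothetical sketch (not the paper's heuristic): a wavelength assigned to a route joins any existing waveband on that route that has a free slot, regardless of spectral adjacency, and a new band is opened only when none has room.

```python
# Hypothetical sketch: pack a routed wavelength into an existing waveband with
# spare capacity, ignoring spectral contiguity; open a new band (and thus new
# switching ports) only when every band is full.

def group_wavelength(route_bands, wavelength, band_size):
    """route_bands: list of sets, each the wavelengths of one waveband."""
    for band in route_bands:
        if len(band) < band_size:      # spare slot in an existing band
            band.add(wavelength)       # discontinuous: adjacency not required
            return route_bands
    route_bands.append({wavelength})   # no room anywhere: open a new band
    return route_bands

bands = []
for lam in [3, 7, 1, 12, 5]:           # demands routed over the same path
    bands = group_wavelength(bands, lam, band_size=4)
print(len(bands), [sorted(b) for b in bands])   # 2 [[1, 3, 7, 12], [5]]
```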
Abstract:
The next-generation SONET metro network is evolving into a service-rich infrastructure. At the edge of such a network, multi-service provisioning platforms (MSPPs) provide efficient data mapping enabled by Generic Framing Procedure (GFP) and Virtual Concatenation (VC). The core of the network tends to be a meshed architecture equipped with Multi-Service Switches (MSSs). In the context of these emerging technologies, we propose a load-balancing spare capacity reallocation approach to improve network utilization in next-generation SONET metro networks. Using our approach, carriers can postpone network upgrades, resulting in increased revenue with reduced capital expenditures (CAPEX). For the first time, we consider the spare capacity reallocation problem from a capacity upgrade and network planning perspective. Our approach can operate in the context of shared-path protection (with backup multiplexing) because it reallocates spare capacity without disrupting working services. Unlike previous spare capacity reallocation approaches, which aim at minimizing total spare capacity, our load-balancing approach minimizes the network load vector (NLV), a novel metric that reflects the network load distribution. Because NLV takes into consideration both uniform and non-uniform link capacity distribution, our approach can benefit both uniform and non-uniform networks. We develop a greedy load-balancing spare capacity reallocation (GLB-SCR) heuristic algorithm to implement this approach. Our experimental results show that GLB-SCR outperforms a previously proposed algorithm (SSR) in terms of established connection capacity and total network capacity in both uniform and non-uniform networks.
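As an illustration of the load-balancing objective, here is a toy sketch under an assumed NLV definition (the paper defines its own): take NLV as the vector of per-link load ratios sorted in decreasing order and compare candidates lexicographically, so a reallocation move is accepted only if it flattens the load distribution.

```python
# Illustrative stand-in for the NLV comparison inside a greedy reallocation
# loop; the exact NLV definition is the paper's, so here NLV is assumed to be
# the link load ratios sorted in decreasing order (smaller NLV = flatter load).

def nlv(spare_used, capacity):
    return sorted((u / c for u, c in zip(spare_used, capacity)), reverse=True)

# Two candidate backup placements on a 3-link network with non-uniform capacity:
capacity = [40, 10, 40]
before   = [20,  8, 12]        # spare capacity used per link today
after    = [24,  4, 12]        # after rerouting one backup path off link 1

if nlv(after, capacity) < nlv(before, capacity):  # list comparison is lexicographic
    print("accept move:", nlv(after, capacity))   # [0.6, 0.4, 0.3] < [0.8, 0.5, 0.3]
else:
    print("keep current placement")
```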
Abstract:
Purpose - The purpose of this paper is to develop an efficient numerical algorithm for the self-consistent solution of the Schrödinger and Poisson equations in one-dimensional systems. The goal is to compute the charge-control and capacitance-voltage characteristics of quantum wire transistors. Design/methodology/approach - The paper presents a numerical formulation employing a non-uniform finite difference discretization scheme, in which the wavefunctions and electronic energy levels are obtained by solving the Schrödinger equation through the split-operator method, while a relaxation method in the FTCS scheme ("Forward Time Centered Space") is used to solve the two-dimensional Poisson equation. Findings - The numerical model is validated by taking previously published results as a benchmark and is then applied to yield the charge-control characteristics and the capacitance-voltage relationship for a split-gate quantum wire device. Originality/value - The paper helps to fulfill the need for C-V models of quantum wire devices. To do so, the authors implemented a straightforward calculation method for the two-dimensional electronic carrier density n(x,y). The formulation reduces the computational procedure to a much simpler problem, similar to the one-dimensional quantization case, significantly diminishing running time.
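A stripped-down illustration of the self-consistent loop follows. This is a sketch only: a uniform 1D grid, a dense eigensolver, and toy parameter values stand in for the paper's non-uniform mesh, split-operator propagation, and 2D FTCS solver.

```python
# Minimal 1D self-consistent Schrodinger-Poisson loop (illustrative, not the
# paper's scheme): solve the eigenproblem, build a toy charge density from the
# lowest states, then relax the Poisson equation in pseudo-time (FTCS-style).
import numpy as np

n, L = 200, 50e-9                       # grid points, domain length (m)
dx = L / (n - 1)
hbar, q = 1.055e-34, 1.602e-19
m = 0.067 * 9.11e-31                    # GaAs-like effective mass, assumed
eps = 12.9 * 8.85e-12                   # assumed permittivity
n_e = 5e8                               # assumed 1D electron density (1/m)

phi = np.zeros(n)                       # electrostatic potential (V)
for it in range(40):
    # Schrodinger: tridiagonal finite-difference Hamiltonian for electrons
    t = hbar**2 / (2 * m * dx**2)
    H = (np.diag(2 * t - q * phi)
         - np.diag(np.full(n - 1, t), 1) - np.diag(np.full(n - 1, t), -1))
    E, psi = np.linalg.eigh(H)
    dens = -q * n_e * np.sum(psi[:, :3]**2, axis=1) / (3 * dx)  # toy density

    # Poisson by pseudo-time relaxation: drives phi'' -> -dens/eps
    for _ in range(400):
        lap = np.zeros(n)
        lap[1:-1] = (phi[2:] + phi[:-2] - 2 * phi[1:-1]) / dx**2
        upd = 0.25 * dx**2 * (lap + dens / eps)  # under-relaxed Jacobi update
        upd[0] = upd[-1] = 0.0                   # Dirichlet: potential pinned
        phi += upd

print("lowest subband energies (eV):", E[:3] / q)
```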
Abstract:
This thesis examines how Greenaway's films take a self-reflexive stance toward the problematic nature of the filmic medium, i.e., how the illusory and artefactual character of film is thematized within the films themselves. The study of this self-reflexivity proceeds along three lines of inquiry: the systematically and artificially organized formal structure, narrativity, and the viewer's mode of perception. On the formal level, Greenaway's films illustrate that film images are discontinuous and non-uniform; how individual visual, acoustic, and technical signs are systematically and artificially organized; and, finally, how the discontinuous and non-uniform film images come to appear continuous and uniform through this systematic and artificial organization of signs. On the allegorical, symbolic, and metaphorical level, his films also thematize the relationship between formal structure, story, and the viewer's interactive mode of perception, as well as the relationship between viewer, film, and filmmaker. The male protagonist is a metaphor for the viewer; the female figures allegorize the two sides of film, its form and its content. On the metaphorical level, the sexual relationship between the male protagonist and the women encompasses the viewer's interactivity with the film.
Abstract:
Electronic applications are nowadays converging under the umbrella of the cloud computing vision. The future ecosystem of information and communication technology will integrate clouds of portable clients and embedded devices that exchange information, through the internet layer, with processing clusters of servers, data centers, and high-performance computing systems. Yet even as the whole of society waits to embrace this revolution, there is a backside to the story. Portable devices need batteries to work far from power plugs, and their storage capacity does not scale as fast as their power requirements grow. At the other end, processing clusters such as data centers and server farms are built upon the integration of thousands of multiprocessors, for each of which the technology scaling of the last decade has produced a dramatic increase in power density with significant spatial and temporal variability. This leads to power and temperature hot-spots, which may cause non-uniform ageing and accelerated chip failure, and all the heat removed from the silicon translates into high cooling costs. Moreover, trends in the ICT carbon footprint show that the run-time power consumption of the whole spectrum of devices accounts for a significant slice of worldwide carbon emissions. This thesis embraces the full ICT ecosystem and its dynamic power consumption concerns by describing a set of new and promising system-level resource management techniques that reduce power consumption and related issues for two corner cases: mobile devices and high-performance computing.
Abstract:
In this thesis we develop solutions to common issues with widefield microscopes, facing the problem of intensity inhomogeneity in an image and dealing with two strong limitations: the impossibility of acquiring either highly detailed images representative of whole samples or deep 3D objects. First, we cope with the non-uniform distribution of the light signal within a single image, known as vignetting. In particular, for both light and fluorescence microscopy, we propose non-parametric, multi-image methods in which the vignetting function is estimated directly from the sample without requiring any prior information. After obtaining flat-field-corrected images, we study how to overcome the limited field of view of the camera, so as to acquire large areas at high magnification. For this purpose we developed mosaicing techniques that work online: starting from a set of overlapping images acquired manually, we validated a fast registration approach to stitch the images together accurately. Finally, we worked on virtually extending the field of view of the camera into the third dimension, with the purpose of reconstructing a single, completely in-focus image from objects that have a significant depth or are displaced across different focal planes. After studying the existing approaches for extending the depth of focus of the microscope, we propose a general method that does not require any prior information. To compare the outcome of existing methods, several standard metrics are commonly used in the literature; however, no metric is available to compare different methods in real cases. We first validated a metric able to rank the methods as the Universal Quality Index does, but without needing any ground-truth reference, and then showed that the approach we developed performs better in both synthetic and real cases.
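In the spirit of the multi-image vignetting estimation described above (the thesis' own estimator is more elaborate), a minimal sketch: given many images of different fields of view, the per-pixel median across the stack approximates the vignetting field, because true sample structure rarely repeats at the same pixel.

```python
# Sketch of non-parametric, multi-image flat-field correction: estimate the
# vignetting field as the normalized per-pixel median of a stack, then divide.
import numpy as np

def estimate_vignetting(stack):
    """stack: (N, H, W) array of raw images of different fields of view."""
    v = np.median(stack, axis=0)          # sample content averages out
    return v / v.max()                    # normalize so correction keeps scale

def flat_field_correct(image, vignetting, eps=1e-6):
    return image / np.maximum(vignetting, eps)

# Synthetic demonstration with a radial fall-off as the "true" vignetting:
rng = np.random.default_rng(0)
yy, xx = np.mgrid[-1:1:64j, -1:1:64j]
vig = 1.0 - 0.4 * (xx**2 + yy**2)
stack = np.array([vig * rng.uniform(0.5, 1.0, size=(64, 64)) for _ in range(50)])

v_hat = estimate_vignetting(stack)
corrected = flat_field_correct(stack[0], v_hat)
```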
Abstract:
Subdivision surfaces are an excellent and important tool, used mainly in 3D animation, because they make it possible to define surfaces of arbitrary shape. This technology extends the concept of B-splines and allows extreme freedom in the topological constraints. Non-Uniform Rational B-Splines (NURBS) can also define surfaces of arbitrary shape, but they do not leave enough freedom for free-form construction: unlike subdivision surfaces, they require joining several surface patches (trimming). NURBS technology is therefore used mainly in CAD environments, whereas in computer graphics subdivision surfaces have been in widespread use for more than 30 years. The aim of this thesis is to summarize the concepts behind this technology, to analyze some of the most widely used subdivision schemes, and to discuss briefly how these schemes and algorithms are used in practice for 3D animation.
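The curve analogue of the surface schemes discussed in the thesis gives the flavor of subdivision in a few lines: Chaikin's corner-cutting scheme repeatedly replaces each edge of a control polygon with points at 1/4 and 3/4, and the refined polygons converge to a quadratic B-spline curve.

```python
# One step of Chaikin corner-cutting subdivision on a control polygon; applying
# it repeatedly converges to a smooth quadratic B-spline curve.
import numpy as np

def chaikin_step(points, closed=True):
    """points: (N, 2) control polygon; returns the refined (2N, 2) polygon."""
    p = np.asarray(points, dtype=float)
    q = np.roll(p, -1, axis=0)            # next vertex (wraps for closed curves)
    refined = np.empty((2 * len(p), 2))
    refined[0::2] = 0.75 * p + 0.25 * q   # new point at 1/4 of each edge
    refined[1::2] = 0.25 * p + 0.75 * q   # new point at 3/4 of each edge
    return refined if closed else refined[:-2]   # open: drop wrap-around edge

curve = np.array([(0, 0), (1, 0), (1, 1), (0, 1)], dtype=float)
for _ in range(4):                        # each step doubles the vertex count
    curve = chaikin_step(curve)
print(curve.shape)                        # (64, 2): the square is now rounded
```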
Testing the structural and cross-cultural validity of the KIDSCREEN-27 quality of life questionnaire
Abstract:
OBJECTIVES: The aim of this study is to assess the structural and cross-cultural validity of the KIDSCREEN-27 questionnaire. METHODS: The 27-item version of the KIDSCREEN instrument was derived from the longer 52-item version and was administered to young people aged 8-18 years in 13 European countries in a cross-sectional survey. Structural and cross-cultural validity were tested using multitrait multi-item analysis, exploratory and confirmatory factor analysis, and Rasch analyses. Zumbo's logistic regression method was applied to assess differential item functioning (DIF) across countries. Reliability was assessed using Cronbach's alpha. RESULTS: Responses were obtained from n = 22,827 respondents (response rate 68.9%). For the combined sample from all countries, exploratory factor analysis with Procrustes rotations revealed a five-factor structure which explained 56.9% of the variance. Confirmatory factor analysis indicated an acceptable model fit (RMSEA = 0.068, CFI = 0.960). The unidimensionality of all dimensions was confirmed (INFIT: 0.81-1.15). DIF results across the 13 countries showed that 5 items presented uniform DIF whereas 10 displayed non-uniform DIF. Reliability was acceptable (Cronbach's alpha = 0.78-0.84 for individual dimensions). CONCLUSIONS: There was substantial evidence for the cross-cultural equivalence of the KIDSCREEN-27 across the countries studied, and the factor structure was highly replicable in individual countries. Further research is needed to correct scores based on the DIF results. The KIDSCREEN-27 is a new, short and promising tool for use in clinical and epidemiological studies.
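For readers unfamiliar with the DIF procedure, here is a minimal sketch of Zumbo's logistic-regression approach on a simulated dichotomous item (KIDSCREEN items are ordinal, so the study's version uses ordinal regression): nested models add a group main effect (uniform DIF) and a group-by-score interaction (non-uniform DIF), compared with likelihood-ratio tests.

```python
# Zumbo-style logistic regression DIF test for one binary item, simulated data.
import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(1)
n = 1000
score = rng.normal(0, 1, n)                      # matching/total score
group = rng.integers(0, 2, n)                    # e.g. two countries
logit = 0.9 * score + 0.5 * group                # built-in uniform DIF
item = rng.binomial(1, 1 / (1 + np.exp(-logit)))

def fit(cols):
    X = sm.add_constant(np.column_stack(cols))
    return sm.Logit(item, X).fit(disp=0)

m1 = fit([score])                                # baseline: score only
m2 = fit([score, group])                         # + group: uniform DIF
m3 = fit([score, group, score * group])          # + interaction: non-uniform DIF

lr_uniform = 2 * (m2.llf - m1.llf)               # likelihood-ratio statistics
lr_nonuni = 2 * (m3.llf - m2.llf)
print("uniform DIF p =", stats.chi2.sf(lr_uniform, df=1))
print("non-uniform DIF p =", stats.chi2.sf(lr_nonuni, df=1))
```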
Abstract:
Light-frame wood buildings are widely built in the United States (U.S.). Natural hazards cause huge losses to light-frame wood construction. This study proposes methodologies and a framework to evaluate the performance and risk of light-frame wood construction. Performance-based engineering (PBE) aims to ensure that a building achieves the desired performance objectives when subjected to hazard loads. In this study, the collapse risk of a typical one-story light-frame wood building is determined using the Incremental Dynamic Analysis method. The collapse risks of buildings at four sites in the Eastern, Western, and Central regions of the U.S. are evaluated. Various sources of uncertainty are considered in the collapse risk assessment so that the influence of uncertainty on the collapse risk of light-frame wood construction can be evaluated. The collapse risks of the same building subjected to maximum considered earthquakes in different seismic zones are found to be non-uniform. In certain areas of the U.S., snow accumulation is significant, causes large economic losses, and threatens life safety. Few studies have investigated the snow hazard combined with a seismic hazard. A Filtered Poisson Process (FPP) model is developed in this study, overcoming the shortcomings of the typically used Bernoulli model. The FPP model is validated by comparing simulation results to weather records obtained from the National Climatic Data Center. The FPP model is applied in the proposed framework to assess the risk of a light-frame wood building subjected to combined snow and earthquake loads. Snow accumulation has a significant influence on the seismic losses of the building, and the Bernoulli snow model underestimates the seismic loss of buildings in areas with snow accumulation. An object-oriented framework is proposed in this study to perform risk assessment for light-frame wood construction. For homeowners and stakeholders, risk expressed as economic loss is much easier to understand than engineering parameters (e.g., inter-story drift). The proposed framework is used in two applications. One is to assess the loss of a building subjected to mainshock-aftershock sequences; aftershock and downtime costs are found to be important factors in the assessment of seismic losses. The framework is also applied to a wood building in the state of Washington to assess its loss under combined earthquake and snow loads. The proposed framework proves to be an appropriate tool for risk assessment of buildings subjected to multiple hazards. Limitations and future work are also identified.
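A generic filtered-Poisson-process sketch (with assumed, not calibrated, parameters) shows the mechanism a Bernoulli model misses: snowfall events arrive as a Poisson process, each deposits a random load, and deposits decay over time, so loads from successive storms can accumulate.

```python
# Generic FPP ground-snow-load simulation: S(t) = sum_i X_i * h(t - t_i),
# with Poisson arrivals t_i, random deposits X_i, and an exponential decay h
# standing in for melting. All parameter values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
rate, season, tau = 0.08, 150.0, 12.0        # events/day, days, decay time (days)

n_events = rng.poisson(rate * season)
arrivals = np.sort(rng.uniform(0, season, n_events))
deposits = rng.exponential(0.4, n_events)    # load per event (kPa), assumed

t = np.linspace(0, season, 1500)
load = sum(x * np.exp(-(t - ti) / tau) * (t >= ti)
           for ti, x in zip(arrivals, deposits))
print("peak ground snow load (kPa):", float(np.max(load)))
```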
Abstract:
Four papers, written in collaboration with the author’s graduate school advisor, are presented. In the first paper, uniform and non-uniform Berry-Esseen (BE) bounds on the convergence to normality of a general class of nonlinear statistics are provided; novel applications to specific statistics, including the non-central Student’s, Pearson’s, and the non-central Hotelling’s, are also stated. In the second paper, a BE bound on the rate of convergence of the F-statistic used in testing hypotheses from a general linear model is given. The third paper considers the asymptotic relative efficiency (ARE) between the Pearson, Spearman, and Kendall correlation statistics; conditions sufficient to ensure that the Spearman and Kendall statistics are equally (asymptotically) efficient are provided, and several models are considered which illustrate the use of such conditions. Lastly, the fourth paper proves that, in the bivariate normal model, the ARE between any of these correlation statistics possesses certain monotonicity properties; quadratic lower and upper bounds on the ARE are stated as direct applications of such monotonicity patterns.
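For orientation, the two bound types have the following canonical shapes for a standardized statistic with distribution function F_n (the papers sharpen the constants and extend such bounds to general nonlinear statistics):

```latex
% Uniform Berry-Esseen bound: one rate for all x.
\[
\sup_{x \in \mathbb{R}} \bigl| F_n(x) - \Phi(x) \bigr| \le \frac{C}{\sqrt{n}}
\]
% Non-uniform bound: the error also decays in the tails.
\[
\bigl| F_n(x) - \Phi(x) \bigr| \le \frac{C}{\sqrt{n}\,(1 + |x|)^{3}}
\quad \text{for all } x \in \mathbb{R}
\]
```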
Abstract:
Skeletal muscle force evaluation is difficult to implement in a clinical setting. Muscle force is typically assessed through either manual muscle testing, isokinetic/isometric dynamometry, or electromyography (EMG). Manual muscle testing is a subjective evaluation of a patient's ability to move voluntarily against gravity and to resist force applied by an examiner. Muscle testing using dynamometers adds accuracy by quantifying the functional mechanical output of a limb. However, like manual muscle testing, dynamometry only provides estimates of the joint moment. EMG quantifies neuromuscular activation signals of individual muscles, and is used to infer muscle function. Despite the abundance of work performed to determine the degree to which EMG signals and muscle forces are related, the basic problem remains that EMG cannot provide a quantitative measurement of muscle force. Intramuscular pressure (IMP), the pressure applied by muscle fibers on interstitial fluid, has been considered as a correlate for muscle force. Numerous studies have shown that an approximately linear relationship exists between IMP and muscle force. A microsensor has recently been developed that is accurate, biocompatible, and appropriately sized for clinical use. While muscle force and pressure have been shown to be correlates, IMP has been shown to be non-uniform within the muscle. As it would not be practicable to evaluate experimentally how IMP is distributed, computational modeling may provide the means to fully evaluate IMP generation in muscles of various shapes and operating conditions. The work presented in this dissertation focuses on the development and validation of computational models of passive skeletal muscle and the evaluation of their performance for prediction of IMP. A transversely isotropic, hyperelastic, and nearly incompressible model will be evaluated along with a poroelastic model.
Abstract:
Mobile Mesh Network based In-Transit Visibility (MMN-ITV) systems provide global real-time tracking capability for logistics. In-transit containers form a multi-hop mesh network to forward tracking information to nearby sinks, which further deliver the information to the remote control center via satellite. The fundamental challenge to the MMN-ITV system is the energy constraint of the battery-operated containers. Coupled with the unique mobility pattern, the cross-MMN behavior, and the large spanned area, this makes it necessary to investigate the energy-efficient communication of the MMN-ITV system thoroughly. First, this dissertation models energy-efficient routing under the unique pattern of the cross-MMN behavior. A new modeling approach, the pseudo-dynamic modeling approach, is proposed to measure the energy efficiency of routing methods in the presence of the cross-MMN behavior. With this approach, it is identified that shortest-path routing and load-balanced routing are energy-efficient in mobile and static networks, respectively. For the MMN-ITV system with both mobile and static MMNs, an energy-efficient routing method, energy-threshold routing, is proposed to achieve the best tradeoff between the two. Second, due to the cross-MMN behavior, neighbor discovery is executed frequently to help new containers join the MMN and hence consumes a similar amount of energy as the data communication itself. By exploiting the unique pattern of the cross-MMN behavior, this dissertation proposes energy-efficient neighbor-discovery wakeup schedules that save up to 60% of the energy spent on neighbor discovery. Vehicular Ad Hoc Networks (VANETs)-based inter-vehicle communication is now widely believed to enhance traffic safety and transportation management at low cost. The end-to-end delay is critical for time-sensitive safety applications in VANETs and can be a decisive performance metric. This dissertation presents a complete analytical model to evaluate the end-to-end delay against the transmission range and the packet arrival rate. The model shows a significant end-to-end delay increase from non-saturated to saturated networks. It hence suggests that distributed power control and admission control protocols for VANETs should aim at improving the real-time capacity (the maximum packet generation rate without causing saturation), rather than the delay itself. Based on this model, it is determined that adopting a uniform transmission range for every vehicle may hinder delay improvement, since it does not allow short path lengths and low interference to coexist. Clusters are proposed to configure non-uniform transmission ranges for the vehicles. Analysis and simulation confirm that such a configuration enhances the real-time capacity and provides an improved tradeoff between end-to-end delay and network capacity. A distributed clustering protocol with minimum message overhead is proposed, which achieves low convergence time.
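As a toy stand-in for the delay model (not the dissertation's analysis), treating each hop as an M/M/1 queue already reproduces the qualitative saturation behavior: end-to-end delay grows like H/(mu - lambda) and diverges as the packet arrival rate approaches the service rate, which is why real-time capacity, not delay itself, is the natural optimization target.

```python
# Toy end-to-end delay: H hops, each an M/M/1 queue with service rate mu and
# arrival rate lam; delay blows up at saturation (lam -> mu). Illustrative only.
def end_to_end_delay(distance, tx_range, lam, mu):
    hops = max(1, round(distance / tx_range))   # longer range -> fewer hops
    if lam >= mu:
        return float("inf")                     # saturated network
    return hops / (mu - lam)                    # sum of per-hop sojourn times

for lam in (10, 50, 90, 99):                    # packets/s, assumed mu = 100
    print(lam, end_to_end_delay(distance=1000, tx_range=250, lam=lam, mu=100))
```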
Abstract:
Nanoparticles are fascinating because their physical and optical properties depend on size. Highly controllable synthesis methods and nanoparticle assembly are essential [6] for highly innovative technological applications. Among nanoparticles, nonhomogeneous core-shell nanoparticles (CSnp) exhibit new properties that arise when the relative dimensions of the core and the shell are varied. The CSnp structure enables various optical resonances and engineered energy barriers, in addition to a high charge-to-surface ratio. Assembly of homogeneous nanoparticles into functional structures has become ubiquitous in biosensors (i.e. optical labeling) [7, 8], nanocoatings [9-13], and electrical circuits [14, 15]; assembly of nonhomogeneous nanoparticles, by contrast, has been explored only to a limited extent. Many conventional nanoparticle assembly methods exist, but this work explores dielectrophoresis (DEP) as a new method. DEP is the polarization of particles suspended in conductive fluids by non-uniform electric fields. Most prior DEP efforts involve microscale particles. Prior work on core-shell nanoparticle assemblies and, separately, on nanoparticle characterization with dielectrophoresis and electrorotation [2-5] did not systematically explore particle size, dielectric properties (permittivity and electrical conductivity), shell thickness, particle concentration, medium conductivity, and frequency. This work is the first, to the best of our knowledge, to examine these dielectrophoretic properties systematically for core-shell nanoparticles; further, we conduct a parametric fit to traditional core-shell models. These biocompatible core-shell nanoparticles were studied to fill a knowledge gap in the DEP field. Experimental results (chapter 5) first examine the medium conductivity, size, and shell material dependencies of the dielectrophoretic assembly of spherical CSnp into 2D and 3D particle assemblies. Chitosan (amino sugar) and poly-L-lysine (amino acid, PLL) CSnp shell materials were custom-synthesized around a hollow (gas) core by utilizing a phospholipid micelle around a volatile fluid as a template for the shell material; this approach is novel and distinct from conventional core-shell models wherein a conductive core is coated with an insulative shell. Experiments were conducted within a 100 nl chamber housing 100 um wide Ti/Au quadrupole electrodes spaced 25 um apart. Frequencies from 100 kHz to 80 MHz at a fixed local field of 5 Vpp were tested with 10^-5 and 10^-3 S/m medium conductivities for 25 seconds. Dielectrophoretic responses of ~220 and ~340 (or ~400) nm chitosan or PLL CSnp were compiled as a function of medium conductivity, size, and shell material.
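The frequency dependence described above is commonly captured by the single-shell Clausius-Mossotti factor. The sketch below evaluates it with illustrative (assumed) material values for a hollow-core, polymer-shell particle in a low-conductivity medium; Re[f_CM] > 0 predicts positive DEP (motion toward high field), Re[f_CM] < 0 negative DEP.

```python
# Single-shell Clausius-Mossotti factor vs. frequency; material values are
# assumptions for illustration, not the study's fitted parameters.
import numpy as np

def eps_complex(eps_r, sigma, w):
    return eps_r * 8.854e-12 - 1j * sigma / w

def f_cm_shell(w, r_core, r_shell, core, shell, medium):
    """core/shell/medium: (relative permittivity, conductivity S/m) tuples."""
    ec, es, em = (eps_complex(*m, w) for m in (core, shell, medium))
    g = (r_shell / r_core) ** 3
    inner = (ec - es) / (ec + 2 * es)            # core-vs-shell CM factor
    ep = es * (g + 2 * inner) / (g - inner)      # effective particle permittivity
    return (ep - em) / (ep + 2 * em)

freqs = np.logspace(5, 7.9, 200)                 # 100 kHz to ~80 MHz, as tested
for f in freqs[::50]:
    w = 2 * np.pi * f
    fcm = f_cm_shell(w, r_core=85e-9, r_shell=110e-9,
                     core=(1.0, 1e-12),          # hollow (gas) core, assumed
                     shell=(60.0, 1e-3),         # chitosan-like shell, assumed
                     medium=(78.0, 1e-5))        # aqueous medium, 1e-5 S/m
    print(f"{f:9.3g} Hz  Re[f_CM] = {fcm.real:+.3f}")
```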
Abstract:
Space-based (satellite, scientific probe, space station, etc.) and millimeter- to micro-scale (such as those used in high-power electronics cooling, weapons cooling in aircraft, etc.) condensers and boilers are shear/pressure driven. They are of increasing interest to system engineers for thermal management because flow boilers and flow condensers offer both high flow-rate-specific heat transfer capacity and very low thermal resistance between the fluid and the heat exchange surface, so large amounts of heat may be removed using reasonably sized devices without the need for excessive temperature differences. However, flow stability issues and degradation of performance of shear/pressure driven condensers and boilers, due to undesirable flow morphology over large portions of their lengths, have mostly prevented their use in these applications. This research is part of an ongoing investigation seeking to close the gap between science and engineering by analyzing two key innovations that could help address these problems. First, it is recommended that the condenser and boiler be operated in an innovative flow configuration that provides a non-participating core vapor stream to stabilize the annular flow regime throughout the device length, accomplished in an energy-efficient manner by means of ducted vapor re-circulation; this is demonstrated experimentally. Second, suitable pulsations applied to the vapor entering the condenser or boiler (from the re-circulating vapor stream) greatly reduce the thermal resistance of the already effective annular flow regime. For the experiments reported here, applying pulsations increased time-averaged heat flux by up to 900% at a location within the flow condenser and by up to 200% at a location within the flow boiler, measured at the heat-exchange surface. Traditional fully condensing flows, reported here for comparison purposes, show similar heat-flux enhancements due to imposed pulsations over a range of frequencies. The shear/pressure driven condensing and boiling flow experiments are carried out in horizontal mm-scale channels with heat exchange through the bottom surface; the sides and top of the flow channel are insulated. The working fluid is FC-72 from 3M Corporation.
Abstract:
Central Switzerland lies tectonically in an intraplate area and recurrence rates of strong earthquakes exceed the time span covered by historic chronicles. However, many lakes are present in the area that act as natural seismographs: their continuous, datable and high-resolution sediment succession allows extension of the earthquake catalogue to pre-historic times. This study reviews and compiles available data sets and results from more than 10 years of lacustrine palaeoseismological research in lakes of northern and Central Switzerland. The concept of using lacustrine mass-movement event stratigraphy to identify palaeo-earthquakes is showcased by presenting new data and results from Lake Zurich. The Late Glacial to Holocene mass-movement units in this lake document a complex history of varying tectonic and environmental impacts. Results include sedimentary evidence of three major and three minor, simultaneously triggered basin-wide lateral slope failure events interpreted as the fingerprints of palaeoseismic activity. A refined earthquake catalogue, which includes results from previous lake studies, reveals a non-uniform temporal distribution of earthquakes in northern and Central Switzerland. A higher frequency of earthquakes in the Late Glacial and Late Holocene period documents two different phases of neotectonic activity; they are interpreted to be related to isostatic post-glacial rebound and relatively recent (re-)activation of seismogenic zones, respectively. Magnitudes and epicentre reconstructions for the largest identified earthquakes provide evidence for two possible earthquake sources: (i) a source area in the region of the Alpine or Sub-Alpine Front due to release of accumulated north-west/south-east compressional stress related to an active basal thrust beneath the Aar massif; and (ii) a source area beneath the Alpine foreland due to reactivation of deep-seated strike-slip faults. Such activity has been repeatedly observed instrumentally, for example, during the most recent magnitude 4.2 and 3.5 earthquakes of February 2012, near Zug. The combined lacustrine record from northern and Central Switzerland indicates that at least one of these potential sources has been capable of producing magnitude 6.2 to 6.7 events in the past.