9 results for Array optimization
at Duke University
Abstract:
Electromagnetic metamaterials are artificially structured media typically composed of arrays of resonant electromagnetic circuits, the dimension and spacing of which are considerably smaller than the free-space wavelengths of operation. The constitutive parameters for metamaterials, which can be obtained using full-wave simulations in conjunction with numerical retrieval algorithms, exhibit artifacts related to the finite size of the metamaterial cell relative to the wavelength. Liu showed that the complicated, frequency-dependent forms of the constitutive parameters can be described by a set of relatively simple analytical expressions. These expressions provide useful insight and can serve as the basis for more intelligent interpolation or optimization schemes. Here, we show that the same analytical expressions can be obtained using a transfer-matrix formalism applied to a one-dimensional periodic array of thin, resonant dielectric or magnetic sheets. The transfer-matrix formalism breaks down, however, when both electric and magnetic responses are present in the same unit cell, as it neglects the magnetoelectric coupling between unit cells. We show that an alternative analytical approach based on the same physical model must be applied for such structures. Furthermore, in addition to the intercell coupling, electric and magnetic resonators within a unit cell may also exhibit magnetoelectric coupling. For such cells, we find an analytical expression for the effective index, which displays markedly characteristic dispersion features that depend on the strength of the coupling coefficient. We illustrate the applicability of the derived expressions by comparing to full-wave simulations on magnetoelectric unit cells. We conclude that the design of metamaterials with tailored simultaneous electric and magnetic response, such as negative index materials, will generally be complicated by potentially unwanted magnetoelectric coupling. © 2010 The American Physical Society.
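For reference, the analytical machinery invoked in the abstract above rests on the standard one-dimensional transfer-matrix result for a periodic stack. As a sketch (the general textbook form, not the paper's specific derivation), the Bloch wavenumber q of a unit cell of period d with transfer matrix T, and the retrieved effective index, obey

\[
\cos(qd) \;=\; \tfrac{1}{2}\,\operatorname{Tr} T \;=\; \tfrac{1}{2}\bigl(T_{11}+T_{22}\bigr),
\qquad
n_{\mathrm{eff}} \;=\; \frac{q}{k_0},
\]

so the frequency dependence of the unit-cell matrix near a sheet resonance maps directly onto the dispersion of the effective constitutive parameters.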
Abstract:
In this paper, we propose a framework for robust optimization that relaxes the standard notion of robustness by allowing the decision maker to vary the protection level in a smooth way across the uncertainty set. We apply our approach to the problem of maximizing the expected value of a payoff function when the underlying distribution is ambiguous and therefore robustness is relevant. Our primary objective is to develop this framework and relate it to the standard notion of robustness, which deals with only a single guarantee across one uncertainty set. First, we show that our approach connects closely to the theory of convex risk measures. We show that the complexity of this approach is equivalent to that of solving a small number of standard robust problems. We then investigate the conservatism benefits and downside probability guarantees implied by this approach and compare to the standard robust approach. Finally, we illustrate the methodology on an asset allocation example consisting of historical market data over a 25-year investment horizon and find in every case we explore that relaxing standard robustness with soft robustness yields a seemingly favorable risk-return trade-off: each case results in a higher out-of-sample expected return for a relatively minor degradation of out-of-sample downside performance. © 2010 INFORMS.
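As a sketch of the relaxation described above (the notation is illustrative, not necessarily the paper's), a standard robust constraint demands a single guarantee over one uncertainty set, whereas the soft version lets the guarantee degrade across a nested family of sets:

\[
\text{standard:}\quad f(x,u) \ge 0 \;\; \forall\, u \in \mathcal{U},
\qquad
\text{soft:}\quad f(x,u) \ge -\varepsilon \;\; \forall\, u \in \mathcal{U}(\varepsilon),\ \forall\, \varepsilon \ge 0,
\]

where \(\{\mathcal{U}(\varepsilon)\}_{\varepsilon \ge 0}\) is nested and grows with \(\varepsilon\), so larger shortfalls are tolerated only on progressively larger (less plausible) portions of the uncertainty set; enforcing the soft constraint over a finite grid of \(\varepsilon\) values amounts to solving a small number of standard robust problems.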
Abstract:
BACKGROUND AND PURPOSE: Previous studies have demonstrated that treatment strategy plays a critical role in ensuring maximum stone fragmentation during shockwave lithotripsy (SWL). We aimed to develop an optimal treatment strategy in SWL to produce maximum stone fragmentation. MATERIALS AND METHODS: Four treatment strategies were evaluated using an in-vitro experimental setup that mimics stone fragmentation in the renal pelvis. Spherical stone phantoms were exposed to 2100 shocks using the Siemens Modularis (electromagnetic) lithotripter. The treatment strategies included increasing output voltage with 100 shocks at 12.3 kV, 400 shocks at 14.8 kV, and 1600 shocks at 15.8 kV, and decreasing output voltage with 1600 shocks at 15.8 kV, 400 shocks at 14.8 kV, and 100 shocks at 12.3 kV. Both the increasing- and decreasing-voltage strategies were run at a pulse repetition frequency (PRF) of 1 and 2 Hz. Fragmentation efficiency was determined using a sequential sieving method to isolate fragments less than 2 mm. A fiberoptic probe hydrophone was used to characterize the pressure waveforms at different output voltage and frequency settings. In addition, a high-speed camera was used to assess cavitation activity in the lithotripter field produced by the different treatment strategies. RESULTS: The increasing output voltage strategy at 1 Hz PRF produced the best stone fragmentation efficiency. This result was significantly better than that of the decreasing voltage strategy at 1 Hz PRF (85.8% vs 80.8%, P=0.017) and of the same strategy at 2 Hz PRF (85.8% vs 79.59%, P=0.0078). CONCLUSIONS: A pretreatment dose of 100 low-voltage output shockwaves (SWs) at 60 SWs/min before increasing to a higher voltage output produces the best overall stone fragmentation in vitro. These findings could lead to increased fragmentation efficiency in vivo and higher success rates clinically.
Abstract:
An enterprise information system (EIS) is an integrated data-applications platform characterized by diverse, heterogeneous, and distributed data sources. For many enterprises, a number of business processes still depend heavily on static rule-based methods and extensive human expertise. Enterprises are faced with the need for optimizing operation scheduling, improving resource utilization, discovering useful knowledge, and making data-driven decisions.
This thesis research is focused on real-time optimization and knowledge discovery that addresses workflow optimization, resource allocation, as well as data-driven predictions of process-execution times, order fulfillment, and enterprise service-level performance. In contrast to prior work on data analytics techniques for enterprise performance optimization, the emphasis here is on realizing scalable and real-time enterprise intelligence based on a combination of heterogeneous system simulation, combinatorial optimization, machine-learning algorithms, and statistical methods.
On-demand digital-print service is a representative enterprise requiring a powerful EIS. We use real-life data from Reischling Press, Inc. (RPI), a digital print service provider (PSP), to evaluate our optimization algorithms.
In order to handle the increase in volume and diversity of demands, we first present a high-performance, scalable, and real-time production scheduling algorithm for production automation based on an incremental genetic algorithm (IGA). The objective of this algorithm is to optimize the order dispatching sequence and balance resource utilization. Compared to prior work, this solution is scalable for a high volume of orders and it provides fast scheduling solutions for orders that require complex fulfillment procedures. Experimental results highlight its potential benefit in reducing production inefficiencies and enhancing the productivity of an enterprise.
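To make the incremental idea concrete, here is a minimal, self-contained sketch of an incremental genetic algorithm for order dispatching. It assumes a deliberately simplified model (identical parallel presses, a single processing time per order, and makespan as the load-balance objective); the function names, parameters, and fitness model are illustrative assumptions, not RPI's production scheduler.

```python
# Sketch of an incremental genetic algorithm (IGA) for order dispatching, under the
# simplified model stated above. Not a production implementation.
import random

def makespan(sequence, times, n_machines):
    """Dispatch orders in the given sequence to the earliest-free machine."""
    loads = [0.0] * n_machines
    for order in sequence:
        i = loads.index(min(loads))          # earliest available machine
        loads[i] += times[order]
    return max(loads)

def evolve(population, times, n_machines, generations=200, elite=2):
    """Plain GA loop: tournament selection, order crossover, swap mutation."""
    def fitness(seq):
        return -makespan(seq, times, n_machines)

    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        next_gen = population[:elite]                       # elitism
        while len(next_gen) < len(population):
            p1, p2 = (max(random.sample(population, 3), key=fitness) for _ in range(2))
            cut = random.randrange(1, len(p1))
            child = p1[:cut] + [o for o in p2 if o not in p1[:cut]]  # order crossover
            if random.random() < 0.2:                       # swap mutation
                a, b = random.sample(range(len(child)), 2)
                child[a], child[b] = child[b], child[a]
            next_gen.append(child)
        population = next_gen
    return max(population, key=fitness)

def incremental_schedule(best_so_far, new_orders, times, n_machines, pop_size=30):
    """Incremental step: seed the population from the current best schedule extended
    with newly arrived orders, instead of restarting the search from scratch."""
    seed = best_so_far + new_orders
    population = [seed[:]] + [random.sample(seed, len(seed)) for _ in range(pop_size - 1)]
    return evolve(population, times, n_machines)

if __name__ == "__main__":
    random.seed(0)
    times = {f"order{i}": random.uniform(1, 10) for i in range(20)}
    best = incremental_schedule([], list(times)[:15], times, n_machines=3)
    best = incremental_schedule(best, list(times)[15:], times, n_machines=3)  # new orders arrive
    print(round(makespan(best, times, 3), 2))
```

The incremental step is the point of the sketch: the search resumes from the best schedule found so far plus the new orders, which is what keeps the response time low for a continuous stream of incoming work.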
We next discuss analysis and prediction of different attributes involved in hierarchical components of an enterprise. We start with a study of the fundamental processes related to real-time prediction. Our process-execution time and process status prediction models integrate statistical methods with machine-learning algorithms. In addition to improved prediction accuracy compared to stand-alone machine-learning algorithms, they also provide a probabilistic estimation of the predicted status. An order generally consists of multiple serial and parallel processes. We next introduce an order-fulfillment prediction model that combines the advantages of multiple classification models by incorporating flexible decision-integration mechanisms. Experimental results show that adopting due dates recommended by the model can significantly reduce the enterprise late-delivery ratio. Finally, we investigate service-level attributes that reflect the overall performance of an enterprise. We analyze and decompose time-series data into different components according to their hierarchical periodic nature, perform correlation analysis,
and develop univariate prediction models for each component as well as multivariate models for correlated components. Predictions for the original time series are aggregated from the predictions of its components. In addition to a significant increase in mid-term prediction accuracy, this distributed modeling strategy also improves short-term time-series prediction accuracy.
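Below is a minimal sketch of the component-wise strategy just described, under simplifying assumptions (a single weekly period, a moving-average trend, and naive per-component forecasters; none of these are the thesis's actual models): decompose the series, predict each component separately, and aggregate the component forecasts.

```python
# Sketch of component-wise time-series prediction: decompose into trend, periodic,
# and residual parts, forecast each, and sum the forecasts. Illustrative only.
import numpy as np

def decompose(y, period=7, window=7):
    kernel = np.ones(window) / window
    trend = np.convolve(y, kernel, mode="same")             # smoothed trend estimate
    detrended = y - trend
    profile = np.array([detrended[i::period].mean() for i in range(period)])
    seasonal = profile[np.arange(len(y)) % period]
    residual = y - trend - seasonal
    return trend, seasonal, residual

def forecast(y, horizon=14, period=7):
    trend, seasonal, residual = decompose(y, period)
    # Per-component predictions: linear extrapolation of the trend, repetition of the
    # periodic profile, and the mean of the residual.
    t = np.arange(len(y))
    slope, intercept = np.polyfit(t, trend, 1)
    future_t = np.arange(len(y), len(y) + horizon)
    trend_hat = slope * future_t + intercept
    seasonal_hat = seasonal[:period][future_t % period]
    residual_hat = np.full(horizon, residual.mean())
    return trend_hat + seasonal_hat + residual_hat          # aggregate the components

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    t = np.arange(120)
    y = 100 + 0.5 * t + 10 * np.sin(2 * np.pi * t / 7) + rng.normal(0, 2, 120)
    print(forecast(y, horizon=7).round(1))
```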
In summary, this thesis research has led to a set of characterization, optimization, and prediction tools for an EIS to derive insightful knowledge from data and use it as guidance for production management. It is expected to provide solutions for enterprises to increase reconfigurability, accomplish more automated procedures, and obtain data-driven recommendations for effective decisions.
Abstract:
The short arms of the ten acrocentric human chromosomes share several repetitive DNAs, including ribosomal RNA genes (rDNA). The rDNA arrays correspond to nucleolar organizing regions that coalesce each cell cycle to form the nucleolus. Telomere disruption by expressing a mutant version of telomere binding protein TRF2 (dnTRF2) causes non-random acrocentric fusions, as well as large-scale nucleolar defects. The mechanisms responsible for acrocentric chromosome sensitivity to dysfunctional telomeres are unclear. In this study, we show that TRF2 normally associates with the nucleolus and rDNA. However, when telomeres are crippled by dnTRF2 or RNAi knockdown of TRF2, gross nucleolar and chromosomal changes occur. We used the controllable dnTRF2 system to precisely dissect the timing and progression of nucleolar and chromosomal instability induced by telomere dysfunction, demonstrating that nucleolar changes precede the DNA damage and morphological changes that occur at acrocentric short arms. The rDNA repeat arrays on the short arms decondense, and are coated by RNA polymerase I transcription binding factor UBF, physically linking acrocentrics to one another as they become fusogenic. These results highlight the importance of telomere function in nucleolar stability and structural integrity of acrocentric chromosomes, particularly the rDNA arrays. Telomeric stress is widely accepted to cause DNA damage at chromosome ends, but our findings suggest that it also disrupts chromosome structure beyond the telomere region, specifically within the rDNA arrays located on acrocentric chromosomes. These results have relevance for Robertsonian translocation formation in humans and mechanisms by which acrocentric-acrocentric fusions are promoted by DNA damage and repair.
Abstract:
Our long-term goal is the detection and characterization of vulnerable plaque in the coronary arteries of the heart using intravascular ultrasound (IVUS) catheters. Vulnerable plaque, characterized by a thin fibrous cap and a soft, lipid-rich necrotic core, is a precursor to heart attack and stroke. Early detection of such plaques may potentially alter the course of treatment of the patient to prevent ischemic events. We have previously described the characterization of carotid plaques using external linear arrays operating at 9 MHz. In addition, we previously modified circular array IVUS catheters by short-circuiting several neighboring elements to produce fixed beamwidths for intravascular hyperthermia applications. In this paper, we modified Volcano Visions 8.2 French, 9 MHz catheters and Volcano Platinum 3.5 French, 20 MHz catheters by short-circuiting portions of the array for acoustic radiation force impulse (ARFI) imaging applications. The catheters had effective transmit aperture sizes of 2 mm and 1.5 mm, respectively. The catheters were connected to a Verasonics scanner and driven with pushing pulses of 180 V p-p to acquire ARFI data from a soft gel phantom with a Young's modulus of 2.9 kPa. The dynamic response of the tissue-mimicking material demonstrates a typical ARFI motion of 1 to 2 microns as the gel phantom displaces away from and recovers back to its normal position. The hardware modifications applied to our IVUS catheters mimic potential beamforming modifications that could be implemented on IVUS scanners. Our results demonstrate that the generation of radiation force from IVUS catheters and the development of intravascular ARFI may be feasible.
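For context, micron-scale ARFI displacements of this kind are commonly estimated by tracking the axial shift of echo data acquired before and after the push. The sketch below is a generic cross-correlation tracker with parabolic sub-sample interpolation applied to simulated RF lines; the sampling rate, sound speed, and the simulated 2-micron shift are assumptions, and this is not the authors' processing chain.

```python
# Generic axial displacement estimator of the kind used for ARFI tracking.
import numpy as np

def estimate_displacement(ref, post, fs=100e6, c=1540.0):
    """Axial displacement (microns) of `post` relative to `ref` via cross-correlation."""
    ref = ref - ref.mean()
    post = post - post.mean()
    xc = np.correlate(post, ref, mode="full")
    k = int(np.argmax(xc))
    # Parabolic interpolation around the correlation peak for sub-sample precision.
    if 0 < k < len(xc) - 1:
        y0, y1, y2 = xc[k - 1], xc[k], xc[k + 1]
        k = k + 0.5 * (y0 - y2) / (y0 - 2 * y1 + y2)
    lag_seconds = (k - (len(ref) - 1)) / fs      # echo delay of the post-push line
    return lag_seconds * c / 2.0 * 1e6           # two-way delay -> one-way distance, microns

if __name__ == "__main__":
    fs, f0, c = 100e6, 9e6, 1540.0               # 9 MHz center frequency, as for the catheters above
    t = np.arange(0, 4e-6, 1 / fs)

    def rf_line(tt):                             # Gaussian-windowed tone burst as a toy echo
        return np.exp(-((tt - 2e-6) ** 2) / (2 * (0.3e-6) ** 2)) * np.cos(2 * np.pi * f0 * tt)

    delay = 2 * 2e-6 / c                         # round-trip delay for a 2 micron displacement
    ref, post = rf_line(t), rf_line(t - delay)
    print(round(estimate_displacement(ref, post, fs, c), 2), "microns")
```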
Abstract:
Determination of copy number variants (CNVs) inferred in genome wide single nucleotide polymorphism arrays has shown increasing utility in genetic variant disease associations. Several CNV detection methods are available, but differences in CNV call thresholds and characteristics exist. We evaluated the relative performance of seven methods: circular binary segmentation, CNVFinder, cnvPartition, gain and loss of DNA, Nexus algorithms, PennCNV and QuantiSNP. Tested data included real and simulated Illumina HumHap 550 data from the Singapore cohort study of the risk factors for Myopia (SCORM) and simulated data from Affymetrix 6.0 and platform-independent distributions. The normalized singleton ratio (NSR) is proposed as a metric for parameter optimization before enacting full analysis. We used 10 SCORM samples for optimizing parameter settings for each method and then evaluated method performance at optimal parameters using 100 SCORM samples. The statistical power, false positive rates, and receiver operating characteristic (ROC) curve residuals were evaluated by simulation studies. Optimal parameters, as determined by NSR and ROC curve residuals, were consistent across datasets. QuantiSNP outperformed other methods based on ROC curve residuals over most datasets. Nexus Rank and SNPRank have low specificity and high power. Nexus Rank calls oversized CNVs. PennCNV detects one of the fewest numbers of CNVs.
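As an illustration of the simulation-based scoring described above (the overlap rule and toy intervals are assumptions, and the paper's normalized singleton ratio is not reproduced here), power can be taken as the fraction of simulated CNVs recovered by a caller, and the false-call rate as the fraction of calls that match no simulated CNV.

```python
# Toy scoring of a CNV caller against simulated truth. Illustrative only.
def overlaps(a, b):
    """True if two (chrom, start, end) intervals overlap."""
    return a[0] == b[0] and a[1] < b[2] and b[1] < a[2]

def score(true_cnvs, called_cnvs):
    """Power = fraction of simulated CNVs recovered; false-call rate = fraction of
    calls matching no simulated CNV."""
    detected = sum(any(overlaps(t, c) for c in called_cnvs) for t in true_cnvs)
    false_calls = sum(not any(overlaps(c, t) for t in true_cnvs) for c in called_cnvs)
    power = detected / len(true_cnvs) if true_cnvs else 0.0
    false_call_rate = false_calls / len(called_cnvs) if called_cnvs else 0.0
    return power, false_call_rate

if __name__ == "__main__":
    truth = [("chr1", 1000, 5000), ("chr2", 200, 900)]    # simulated CNVs (hypothetical)
    calls = [("chr1", 1200, 4800), ("chr3", 50, 400)]     # one true hit, one false call
    print(score(truth, calls))                            # -> (0.5, 0.5)
```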
Abstract:
CONCLUSION: Radiation dose reduction while preserving image quality can be readily implemented with this approach. Furthermore, the availability of a dosimetric data archive provides immediate feedback on the implemented optimization strategies. BACKGROUND: JCI Standards and European legislation (EURATOM 59/2013) require the implementation of patient radiation protection programs in diagnostic radiology. The aim of this study is to demonstrate that patient radiation exposure can be reduced without decreasing image quality through a multidisciplinary team (MT) that analyzes dosimetric data from diagnostic examinations. EVALUATION: Data from CT examinations performed with two different scanners (Siemens Definition™ and GE LightSpeed Ultra™) between November and December 2013 are considered. The CT scanners are configured to automatically send images to DoseWatch© software, which stores output parameters (e.g., kVp, mAs, pitch) and exposure data (e.g., CTDIvol, DLP, SSDE). Data are analyzed and discussed by an MT composed of medical physicists and radiologists to identify protocols with critical dosimetric values and to suggest possible improvement actions. Furthermore, the large amount of available data makes it possible to monitor the diagnostic protocols currently in use and to identify different statistical populations for each of them. DISCUSSION: We identified critical average CTDIvol values for head and facial bones examinations (61.8 mGy over 151 scans and 61.6 mGy over 72 scans, respectively) performed with the GE LightSpeed CT™. Statistical analysis allowed us to identify two different populations for head scans, one of which accounted for only 10% of the scans and corresponded to lower exposure values. The MT adopted this protocol as the standard. Moreover, constant monitoring of the output parameters allowed us to identify unusual values in facial bones exams, caused by changes made during maintenance service, which the team promptly suggested correcting. This resulted in substantial savings in average CTDIvol values of approximately 15% and 50% for head and facial bones exams, respectively. Diagnostic image quality was deemed suitable for clinical use by the radiologists.
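A minimal sketch of the kind of dose-monitoring analysis such a team might run on the archived data: group CTDIvol values by protocol, flag protocols whose average exceeds a local reference level, and compute the percent reduction after a protocol change. Field names, reference levels, and the sample records are illustrative assumptions, not DoseWatch output.

```python
# Toy dose-monitoring analysis over archived CT exposure records. Illustrative only.
from statistics import mean
from collections import defaultdict

REFERENCE_CTDIVOL = {"head": 60.0, "facial_bones": 35.0}   # assumed local reference levels, mGy

def group_by_protocol(records):
    groups = defaultdict(list)
    for r in records:
        groups[r["protocol"]].append(r["ctdi_vol"])
    return groups

def flag_critical(groups):
    """Protocols whose average CTDIvol exceeds the assumed reference level."""
    return {p: round(mean(v), 1) for p, v in groups.items()
            if p in REFERENCE_CTDIVOL and mean(v) > REFERENCE_CTDIVOL[p]}

def percent_reduction(before, after):
    return 100.0 * (mean(before) - mean(after)) / mean(before)

if __name__ == "__main__":
    scans = [{"protocol": "head", "ctdi_vol": v} for v in (61.8, 63.0, 60.5)] + \
            [{"protocol": "facial_bones", "ctdi_vol": v} for v in (61.6, 60.0)]
    print(flag_critical(group_by_protocol(scans)))
    print(round(percent_reduction([61.8, 63.0], [52.0, 53.5]), 1), "% dose reduction")
```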