987 results for Double point curve


Relevance:

30.00%

Publisher:

Abstract:

This paper deals with determining the points of zero charge of natural and Na+-saturated mineral kaolinites using two methods: (1) acid-base potentiometric titration was employed to measure the adsorption of H+ and OH- on amphoteric surfaces in solutions of varying ionic strength, in order to determine graphically the point of zero net proton charge (PZNPC), defined equivalently as the point of zero salt effect (PZSE); (2) mass titration curves at different electrolyte concentrations were used to estimate PZNPCs by interpolation and to compare them with those determined by potentiometric titration. The two methods yielded approximately similar points of zero charge for the two kaolinites, between 6.5 and 7.8, comparable to previously reported values and within the range expected for these clay minerals. Comparison of the potentiometric surface titration curves obtained at 25 °C with those published in the literature reveals significant discrepancies, both in shape and in the pH values of the PZNPCs.

Relevance:

30.00%

Publisher:

Abstract:

The salt titration (ST) method was evaluated against the potentiometric titration (PT) method for determining the zero point of charge (ZPC) of 26 soils with variable-charge clays, i.e., Oxisols and Ultisols from Thailand and Andisols from Japan. In addition to the determination of ST-pH0 as the zero point of charge, a calculation procedure was adopted in order to extract more information from the titration curve. Furthermore, for the purpose of cross-checking the ZPC determined by the PT method, the ST procedure was subsequently applied to the samples analyzed by the PT method.

Relevance:

30.00%

Publisher:

Abstract:

The efficient cleavage of plasmid DNA (pCAT) by binuclear lanthanide complexes was investigated. At 37 °C and neutral pH, both Ho₂³⁺L and Er₂³⁺L promoted 100% conversion of the supercoiled plasmid to the nicked circular and linear forms within 1 h. The corresponding saturation kinetics curve for the cleavage of pCAT plasmid by the binuclear lanthanide complexes showed the expected increase with catalyst concentration. (C) 1999 Elsevier Science S.A. All rights reserved.
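
The saturation behaviour described here has the familiar hyperbolic form in which the cleavage rate rises with catalyst concentration and then plateaus. As a minimal illustration (not the authors' analysis, and with purely hypothetical numbers), the sketch below fits rate-versus-concentration data to v = v_max·[C]/(K + [C]) with SciPy.

```python
# A minimal sketch (not the authors' analysis): fitting a hyperbolic saturation
# curve, v = v_max * [C] / (K + [C]), to hypothetical cleavage-rate data.
import numpy as np
from scipy.optimize import curve_fit

def saturation(conc, v_max, k_half):
    """Hyperbolic saturation kinetics: the rate rises with catalyst concentration
    and plateaus at v_max; k_half is the concentration at half-maximal rate."""
    return v_max * conc / (k_half + conc)

# Hypothetical catalyst concentrations (mM) and observed cleavage rates (a.u.)
conc = np.array([0.05, 0.1, 0.2, 0.5, 1.0, 2.0, 5.0])
rate = np.array([0.9, 1.6, 2.6, 4.1, 5.2, 5.9, 6.4])

(v_max, k_half), _ = curve_fit(saturation, conc, rate, p0=(7.0, 0.5))
print(f"v_max ~ {v_max:.2f} a.u., K ~ {k_half:.2f} mM")
```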

Relevance:

30.00%

Publisher:

Abstract:

BACKGROUND: Despite numerous studies on endotracheal tube cuff pressure (CP) management, the literature has yet to establish a technique capable of adequately filling the cuff with an appropriate volume of air while generating low CP in a less subjective way. The purpose of this prospective study was to evaluate and compare the CP levels and air volume required to fill the endotracheal tube cuff using 2 different techniques (volume-time curve versus minimal occlusive volume) in the immediate postoperative period after coronary artery bypass grafting. METHODS: A total of 267 subjects were analyzed. After the surgery, the lungs were ventilated using pressure controlled continuous mandatory ventilation, and the same ventilatory parameters were set. Upon arrival in the ICU, the cuff was completely deflated and re-inflated, and at this point the volume of air to fill the cuff was adjusted using one of 2 randomly selected techniques: the volume-time curve or the minimal occlusive volume technique. We measured the volume of air injected into the cuff, the CP, and the expired tidal volume of the mechanical ventilation after the application of each technique. RESULTS: The volume-time curve technique demonstrated a significantly lower CP and a lower volume of air injected into the cuff, compared to the minimal occlusive volume technique (P < .001). No significant difference was observed in the expired tidal volume between the 2 techniques (P = .052). However, with the minimal occlusive volume technique, 17% (n = 47) of subjects experienced air leakage, as observed on the volume-time graph. CONCLUSIONS: The volume-time curve technique was associated with a lower CP and a lower volume of air injected into the cuff, when compared to the minimal occlusive volume technique, in the immediate postoperative period after coronary artery bypass grafting. Therefore, the volume-time curve may be a more reliable alternative for endotracheal tube cuff management.

Relevance:

30.00%

Publisher:

Abstract:

BACKGROUND: A previous investigation showed that the volume-time curve technique could be an alternative for endotracheal tube (ETT) cuff management. However, the clinical impact of applying the volume-time curve has not been documented. The purpose of this study was to compare the occurrence and intensity of sore throat, cough, thoracic pain, and pulmonary function between 2 techniques for ETT cuff management after coronary artery bypass grafting: the volume-time curve technique versus the minimal occlusive volume (MOV) technique. METHODS: A total of 450 subjects were randomized into 2 groups for cuff management after intubation: MOV group (n = 222) and volume-time curve group (n = 228). We measured cuff pressure before extubation. We performed spirometry 24 h before and after surgery. We graded sore throat and cough on a 4-point scale at 1, 24, 72, and 120 h after extubation, and assessed thoracic pain at 24 h after extubation, quantifying the level of pain on a 10-point scale. RESULTS: The volume-time curve group presented significantly lower cuff pressure (30.9 +/- 2.8 vs 37.7 +/- 3.4 cm H2O), a lower incidence and intensity of sore throat (1 h, 23.7 vs 51.4%; 24 h, 18.9 vs 40.5%, P < .001), cough (1 h, 19.3 vs 48.6%; 24 h, 18.4 vs 42.3%, P < .001) and thoracic pain (5.2 +/- 1.8 vs 7.1 +/- 1.7), and better preservation of FVC (49.5 +/- 9.9 vs 41.8 +/- 12.9%, P = .005) and FEV1 (46.6 +/- 1.8 vs 38.6 +/- 1.4%, P = .005), compared with the MOV group. CONCLUSIONS: The subjects who received the volume-time curve technique for ETT cuff management presented a significantly lower incidence and severity of sore throat and cough, less thoracic pain, and less impairment of pulmonary function than subjects who received the MOV technique during the first 24 h after coronary artery bypass grafting.

Relevance:

30.00%

Publisher:

Abstract:

With the rapid growth of the Internet and digital communications, the volume of sensitive electronic transactions being transferred and stored over and on insecure media has increased dramatically in recent years. The growing demand for cryptographic systems to secure this data, across a multitude of platforms ranging from large servers to small mobile devices and smart cards, has necessitated research into low-cost, flexible and secure solutions. As constraints on architectures such as area, speed and power become key factors in choosing a cryptosystem, methods for speeding up the development and evaluation process are necessary. This thesis investigates flexible hardware architectures for the main components of a cryptographic system. Dedicated hardware accelerators can provide significant performance improvements when compared to implementations on general purpose processors. Each of the proposed designs is analysed in terms of speed, area, power, energy and efficiency. Field Programmable Gate Arrays (FPGAs) are chosen as the development platform due to their fast development time and reconfigurable nature.

Firstly, a reconfigurable architecture for performing elliptic curve point scalar multiplication on an FPGA is presented. Elliptic curve cryptography is one such method to secure data, offering similar security levels to traditional systems, such as RSA, but with smaller key sizes, translating into lower memory and bandwidth requirements. The architecture is implemented using different underlying algorithms and coordinates for dedicated Double-and-Add algorithms, twisted Edwards algorithms and SPA-secure algorithms, and its power consumption and energy on an FPGA are measured. Hardware implementation results for these new algorithms are compared against their software counterparts, and the best choices for minimum area-time and area-energy circuits are then identified and examined for larger key and field sizes.

Secondly, implementation methods for another component of a cryptographic system, namely hash functions, developed in the recently concluded SHA-3 hash competition are presented. Various designs from the three rounds of the NIST-run competition are implemented on FPGA along with an interface to allow fair comparison of the different hash functions when operating in a standardised and constrained environment. Different methods of implementation for the designs and their subsequent performance are examined in terms of throughput, area and energy costs using various constraint metrics.

Comparing many different implementation methods and algorithms is nontrivial. Another aim of this thesis is the development of generic interfaces used both to reduce implementation and test time and also to enable fair baseline comparisons of different algorithms when operating in a standardised and constrained environment.

Finally, a hardware-software co-design cryptographic architecture is presented. This architecture is capable of supporting multiple types of cryptographic algorithms and is described through an application for performing public key cryptography, namely the Elliptic Curve Digital Signature Algorithm (ECDSA). This architecture makes use of the elliptic curve architecture and the hash functions described previously. These components, along with a random number generator, provide hardware acceleration for a MicroBlaze-based cryptographic system. The trade-off between performance and flexibility is discussed using dedicated software and hardware-software co-design implementations of the elliptic curve point scalar multiplication block. Results are then presented in terms of the overall cryptographic system.
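
As context for the dedicated Double-and-Add algorithms mentioned above, the following is a minimal software sketch (not the thesis hardware design) of left-to-right Double-and-Add scalar multiplication on a short Weierstrass curve over a toy prime field; the curve parameters are illustrative only.

```python
# A minimal software sketch of left-to-right Double-and-Add scalar multiplication
# on a short Weierstrass curve y^2 = x^3 + a*x + b over GF(p), in affine
# coordinates. The tiny curve below is illustrative, not a thesis parameter set.
P_MOD, A, B = 97, 2, 3             # toy prime field and curve coefficients
INF = None                         # point at infinity

def inv_mod(x, p=P_MOD):
    return pow(x, p - 2, p)        # modular inverse via Fermat's little theorem

def point_add(P, Q):
    """Add two affine points on the curve (handles doubling and the identity)."""
    if P is INF:
        return Q
    if Q is INF:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return INF                 # P + (-P) = point at infinity
    if P == Q:                     # doubling slope: (3*x1^2 + a) / (2*y1)
        s = (3 * x1 * x1 + A) * inv_mod(2 * y1) % P_MOD
    else:                          # addition slope: (y2 - y1) / (x2 - x1)
        s = (y2 - y1) * inv_mod(x2 - x1) % P_MOD
    x3 = (s * s - x1 - x2) % P_MOD
    y3 = (s * (x1 - x3) - y1) % P_MOD
    return (x3, y3)

def scalar_mult(k, P):
    """Left-to-right Double-and-Add: double for every key bit, add when the bit is 1."""
    R = INF
    for bit in bin(k)[2:]:
        R = point_add(R, R)        # double
        if bit == '1':
            R = point_add(R, P)    # add
    return R

G = (3, 6)                         # a point on y^2 = x^3 + 2x + 3 over GF(97)
print(scalar_mult(5, G))
```

A hardware datapath would replace these modular operations with the field arithmetic units developed in the thesis, but the control flow, one doubling per key bit plus a conditional addition, is the same; that data-dependent addition is also what the SPA-secure variants are designed to hide.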

Relevance:

30.00%

Publisher:

Abstract:

To ensure genomic integrity, dividing cells implement multiple checkpoint pathways during the course of the cell cycle. In response to DNA damage, cells may either halt progression of the cycle (cell cycle arrest) or undergo apoptosis. This choice depends on the extent of damage and the cell's capacity for DNA repair. Cell cycle arrest induced by double-stranded DNA breaks (DSBs) relies on activation of the ataxia-telangiectasia mutated (ATM) protein kinase, which phosphorylates cell cycle effectors (e.g., Chk2 and p53) to inhibit cell cycle progression. ATM is an S/T-Q-directed kinase that is critical for the cellular response to DSBs. Following DNA damage, ATM is activated and recruited to sites of damage by the MRN protein complex (Mre11-Rad50-Nbs1), where ATM phosphorylates multiple substrates to trigger a cell cycle arrest. In cancer cells, this regulation may be faulty and cell division may proceed even in the presence of damaged DNA. We show here that the RSK kinase, often elevated in cancers, can suppress DSB-induced ATM activation in both Xenopus egg extracts and human tumor cell lines. In analyzing each step in ATM activation, we found that RSK disrupts binding of the MRN complex to DSB DNA. RSK can directly phosphorylate the Mre11 protein at Ser 676 both in vitro and in intact cells and can thereby inhibit loading of Mre11 onto DSB DNA. Accordingly, mutation of Ser 676 to Ala can reverse the inhibition of the DSB response by RSK. Collectively, these data point to Mre11 as an important locus of RSK-mediated checkpoint inhibition acting upstream of ATM activation.

The phosphorylation of Mre11 on Ser 676 is antagonized by phosphatases. Here, we screened for phosphatases that target this site and identified PP5 as a candidate. This finding is consistent with the fact that PP5 is required for the ATM-mediated DNA damage response, indicating that PP5 may promote the DSB-induced, ATM-dependent DNA damage response by targeting Mre11 upstream of ATM.

Relevance:

30.00%

Publisher:

Abstract:

Histopathology is the clinical standard for tissue diagnosis. However, histopathology has several limitations: it requires tissue processing, which can take 30 minutes or more, and it requires a highly trained pathologist to diagnose the tissue. Additionally, the diagnosis is qualitative, and the lack of quantitation can lead to observer-dependent diagnoses. Taken together, these factors make it difficult to diagnose tissue at the point of care using histopathology.

Several clinical situations could benefit from more rapid and automated histological processing, which could reduce the time and the number of steps required between obtaining a fresh tissue specimen and rendering a diagnosis. For example, there is a need for rapid detection of residual cancer on the surface of tumor resection specimens during excisional surgeries, which is known as intraoperative tumor margin assessment. Additionally, rapid assessment of biopsy specimens at the point of care could enable clinicians to confirm that a suspicious lesion is successfully sampled, thus preventing an unnecessary repeat biopsy procedure. Rapid and low-cost histological processing could also be potentially useful in settings lacking the human resources and equipment necessary to perform standard histologic assessment. Lastly, automated interpretation of tissue samples could potentially reduce inter-observer error, particularly in the diagnosis of borderline lesions.

To address these needs, high quality microscopic images of the tissue must be obtained in rapid timeframes, in order for a pathologic assessment to be useful for guiding the intervention. Optical microscopy is a powerful technique to obtain high-resolution images of tissue morphology in real-time at the point of care, without the need for tissue processing. In particular, a number of groups have combined fluorescence microscopy with vital fluorescent stains to visualize micro-anatomical features of thick (i.e. unsectioned or unprocessed) tissue. However, robust methods for segmentation and quantitative analysis of heterogeneous images are essential to enable automated diagnosis. Thus, the goal of this work was to obtain high resolution imaging of tissue morphology through employing fluorescence microscopy and vital fluorescent stains and to develop a quantitative strategy to segment and quantify tissue features in heterogeneous images, such as nuclei and the surrounding stroma, which will enable automated diagnosis of thick tissues.

To achieve these goals, three specific aims were proposed. The first aim was to develop an image processing method that can differentiate nuclei from background tissue heterogeneity and enable automated diagnosis of thick tissue at the point of care. A computational technique called sparse component analysis (SCA) was adapted to isolate features of interest, such as nuclei, from the background. SCA has been used previously in the image processing community for image compression, enhancement, and restoration, but has never been applied to separate distinct tissue types in a heterogeneous image. In combination with a high resolution fluorescence microendoscope (HRME) and the contrast agent acriflavine, the utility of this technique was demonstrated through imaging preclinical sarcoma tumor margins. Acriflavine localizes to the nuclei of cells, where it reversibly associates with RNA and DNA. Additionally, acriflavine shows some affinity for collagen and muscle. SCA was adapted to isolate acriflavine positive features, or APFs (which correspond to RNA and DNA), from background tissue heterogeneity. The circle transform (CT) was applied to the SCA output to quantify the size and density of overlapping APFs. The sensitivity of the SCA+CT approach to variations in APF size, density and background heterogeneity was demonstrated through simulations. Specifically, SCA+CT achieved the lowest errors for higher contrast ratios and larger APF sizes. When applied to tissue images of excised sarcoma margins, SCA+CT correctly isolated APFs and showed consistently increased density in tumor and tumor + muscle images compared to images containing muscle. Next, variables were quantified from images of resected primary sarcomas and used to optimize a multivariate model. The sensitivity and specificity for differentiating positive from negative ex vivo resected tumor margins were 82% and 75%, respectively. The utility of this approach was further tested by imaging the in vivo tumor cavities of 34 mice after resection of a sarcoma, with local recurrence as the benchmark. When applied prospectively to images from the tumor cavity, the sensitivity and specificity for differentiating local recurrence were 78% and 82%, respectively. The results indicate that SCA+CT can accurately delineate APFs in heterogeneous tissue, which is essential to enable automated and rapid surveillance of tissue pathology.

Two primary challenges were identified in the work in aim 1. First, while SCA can be used to isolate features, such as APFs, from heterogeneous images, its performance is limited by the contrast between APFs and the background. Second, while it is feasible to create mosaics by scanning a sarcoma tumor bed in a mouse, which is on the order of 3-7 mm in any one dimension, it is not feasible to evaluate an entire human surgical margin. Thus, improvements to the microscopic imaging system were made to (1) improve image contrast through rejecting out-of-focus background fluorescence and to (2) increase the field of view (FOV) while maintaining the sub-cellular resolution needed for delineation of nuclei. To address these challenges, a technique called structured illumination microscopy (SIM) was employed in which the entire FOV is illuminated with a defined spatial pattern rather than scanning a focal spot, such as in confocal microscopy.

Thus, the second aim was to improve image contrast and increase the FOV by employing wide-field, non-contact structured illumination microscopy, and to optimize the segmentation algorithm for the new imaging modality. Both image contrast and FOV were increased through the development of a wide-field fluorescence SIM system. Clear improvement in image contrast was seen in structured illumination images compared to uniform illumination images. Additionally, the FOV is over 13X larger than that of the fluorescence microendoscope used in aim 1. Initial segmentation results of SIM images revealed that SCA is unable to segment large numbers of APFs in the tumor images. Because the FOV of the SIM system is over 13X larger than the FOV of the fluorescence microendoscope, dense collections of APFs commonly seen in tumor images could no longer be sparsely represented, and the fundamental sparsity assumption associated with SCA was no longer met. Thus, an algorithm called maximally stable extremal regions (MSER) was investigated as an alternative approach for APF segmentation in SIM images. MSER was able to accurately segment large numbers of APFs in SIM images of tumor tissue. In addition to optimizing MSER for SIM image segmentation, an optimal frequency of the illumination pattern used in SIM was carefully selected, because the image signal-to-noise ratio (SNR) depends on the grid frequency. A grid frequency of 31.7 mm⁻¹ led to the highest SNR and the lowest percent error associated with MSER segmentation.
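
As an illustration of the MSER segmentation step (the parameters and data here are illustrative, not those tuned in this work), the sketch below applies OpenCV's MSER detector to a synthetic grayscale image with bright blob-like features standing in for APFs and reports region count and size.

```python
# A hedged sketch of MSER-based segmentation of bright nucleus-like features
# (APFs) using OpenCV. The synthetic image and default parameters are
# illustrative only.
import cv2
import numpy as np

# Synthetic stand-in for a fluorescence SIM frame: bright blobs on a dark background
gray = np.zeros((256, 256), dtype=np.uint8)
cv2.circle(gray, (60, 60), 8, 200, -1)
cv2.circle(gray, (150, 100), 6, 180, -1)
cv2.circle(gray, (200, 200), 10, 220, -1)

mser = cv2.MSER_create()                 # default detector; delta/area limits can be tuned
regions, bboxes = mser.detectRegions(gray)

if regions:
    areas = [len(r) for r in regions]    # pixel count per detected region
    print(f"{len(regions)} candidate APFs, median area {int(np.median(areas))} px")

# Draw convex hulls of the detected regions for visual inspection
vis = cv2.cvtColor(gray, cv2.COLOR_GRAY2BGR)
hulls = [cv2.convexHull(r) for r in regions]
cv2.polylines(vis, hulls, True, (0, 255, 0), 1)
cv2.imwrite("apf_segmentation.png", vis)
```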

Once MSER was optimized for SIM image segmentation and the optimal grid frequency was selected, a quantitative model was developed to diagnose mouse sarcoma tumor margins that were imaged ex vivo with SIM. Tumor margins were stained with acridine orange (AO) in aim 2 because AO was found to stain the sarcoma tissue more brightly than acriflavine. Both acriflavine and AO are intravital dyes, which have been shown to stain nuclei, skeletal muscle, and collagenous stroma. A tissue-type classification model was developed to differentiate localized regions (75x75 µm) of tumor from skeletal muscle and adipose tissue based on the MSER segmentation output. Specifically, a logistic regression model was used to classify each localized region. The logistic regression model yielded an output in terms of the probability (0-100%) that tumor was located within each 75x75 µm region. The model performance was tested using a receiver operating characteristic (ROC) curve analysis, which revealed 77% sensitivity and 81% specificity. For margin classification, the whole margin image was divided into localized regions and this tissue-type classification model was applied. In a subset of 6 margins (3 negative, 3 positive), it was shown that with a tumor probability threshold of 50%, 8% of all regions from negative margins exceeded this threshold, while over 17% of all regions exceeded the threshold in the positive margins. Thus, 8% of regions in negative margins were considered false positives. These false positive regions are likely due to the high density of APFs present in normal tissues, which clearly demonstrates a challenge in implementing this automatic algorithm based on AO staining alone.
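
The per-region classification and ROC analysis described above can be prototyped with standard tools; the sketch below uses synthetic per-region features (hypothetical stand-ins for the study's segmentation-derived variables) to fit a logistic regression and report sensitivity and specificity at a 50% tumor-probability threshold with scikit-learn.

```python
# A hedged sketch of per-region logistic-regression classification with ROC
# analysis; the features and labels are synthetic stand-ins, not study data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)
n = 400
# Hypothetical per-region features: APF density and median APF size
density = np.concatenate([rng.normal(0.4, 0.1, n // 2), rng.normal(0.7, 0.1, n // 2)])
size = np.concatenate([rng.normal(30, 5, n // 2), rng.normal(40, 5, n // 2)])
X = np.column_stack([density, size])
y = np.concatenate([np.zeros(n // 2), np.ones(n // 2)])   # 0 = non-tumor, 1 = tumor

model = LogisticRegression().fit(X, y)
prob = model.predict_proba(X)[:, 1]        # tumor probability (0-1) for each region

fpr, tpr, thresholds = roc_curve(y, prob)
print(f"AUC = {roc_auc_score(y, prob):.2f}")

# Sensitivity/specificity at the 50% tumor-probability threshold used in the text
pred = prob >= 0.5
sensitivity = np.mean(pred[y == 1])
specificity = np.mean(~pred[y == 0])
print(f"sensitivity {sensitivity:.0%}, specificity {specificity:.0%}")
```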

Thus, the third aim was to improve the specificity of the diagnostic model by leveraging other sources of contrast. Modifications were made to the SIM system to enable fluorescence imaging at a variety of wavelengths. Specifically, the SIM system was modified to enable imaging of red fluorescent protein (RFP)-expressing sarcomas, which were used to delineate the location of tumor cells within each image. Initial analysis of AO-stained panels confirmed that there was room for improvement in tumor detection, particularly with regard to false positive regions that were negative for RFP. One approach for improving the specificity of the diagnostic model was to investigate a fluorophore more specific for tumor. Specifically, tetracycline was selected because it appeared to specifically stain freshly excised tumor tissue in a matter of minutes, and was non-toxic and stable in solution. Results indicated that tetracycline staining has promise for increasing the specificity of tumor detection in SIM images of a preclinical sarcoma model, and further investigation is warranted.

In conclusion, this work presents the development of a combination of tools that is capable of automated segmentation and quantification of micro-anatomical images of thick tissue. When compared to the fluorescence microendoscope, wide-field multispectral fluorescence SIM imaging provided improved image contrast, a larger FOV with comparable resolution, and the ability to image a variety of fluorophores. MSER was an appropriate and rapid approach to segment dense collections of APFs from wide-field SIM images. Variables that reflect the morphology of the tissue, such as the density, size, and shape of nuclei and nucleoli, can be used to automatically diagnose SIM images. The clinical utility of SIM imaging and MSER segmentation to detect microscopic residual disease has been demonstrated by imaging excised preclinical sarcoma margins. Ultimately, this work demonstrates that fluorescence imaging of tissue micro-anatomy combined with a specialized algorithm for delineation and quantification of features is a means for rapid, non-destructive and automated detection of microscopic disease, which could improve cancer management in a variety of clinical scenarios.

Relevance:

30.00%

Publisher:

Abstract:

Large nonlinear acoustic waves are discussed in a plasma made up of cold supersonic and adiabatic subsonic positive ions, in the presence of hot isothermal electrons, with the help of Sagdeev pseudopotential theory. In this model, no solitons are found at the acoustic speed, and no compositional parameter ranges exist where solutions of opposite polarities can coexist. All nonlinear modes are thus super-acoustic, but polarity changes are possible. The upper limits on admissible structure velocities come from different physical arguments, in a strict order as the fractional cool ion density is increased: infinite cold ion compression, the warm ion sonic point, positive double layers, negative double layers, and finally, positive double layers again. However, not all ranges exist for all mass and temperature ratios. Whereas the cold and warm ion sonic point limitations are always present over a wide range of mass and temperature ratios, and thus positive polarity solutions can easily be obtained, double layers have a more restricted existence range, especially if polarity changes are sought. (C) 2011 American Institute of Physics. [doi:10.1063/1.3579397]

Relevance:

30.00%

Publisher:

Abstract:

The POINT-AGAPE (Pixel-lensing Observations with the Isaac Newton Telescope-Andromeda Galaxy Amplified Pixels Experiment) survey is an optical search for gravitational microlensing events towards the Andromeda galaxy (M31). As well as microlensing, the survey is sensitive to many different classes of variable stars and transients. Here we describe the automated detection and selection pipeline used to identify M31 classical novae (CNe), and we present the resulting catalogue of 20 CN candidates observed over three seasons. CNe are observed both in the bulge region and over a wide area of the M31 disc. Nine of the CNe are caught during the final rise phase and all are well sampled in at least two colours. The excellent light-curve coverage has allowed us to detect and classify CNe over a wide range of speed class, from very fast to very slow. Among the light curves is a moderately fast CN exhibiting entry into a deep transition minimum, followed by its final decline. We have also observed in detail a very slow CN which faded by only 0.01 mag d⁻¹ over a 150-d period. We detect other interesting variable objects, including one of the longest-period and most luminous Mira variables. The CN catalogue constitutes a uniquely well-sampled and objectively selected data set with which to study the statistical properties of CNe in M31, such as the global nova rate, the reliability of novae as standard-candle distance indicators and the dependence of the nova population on stellar environment. The findings of this statistical study will be reported in a follow-up paper.

Relevance:

30.00%

Publisher:

Abstract:

The recollision model has been applied to separate the probability for double ionization into contributions from electron-impact ionization and electron-impact excitation, for intensities at which the dielectronic interaction is important for generating double ionization. For a wavelength of 780 nm, electron-impact excitation dominates just above the threshold intensity for double ionization, approximately 1.2 × 10¹⁴ W cm⁻², with electron-impact ionization becoming more important at higher intensities. For a wavelength of 390 nm, the ratio between electron-impact ionization and electron-impact excitation remains fairly constant for all intensities above the threshold intensity for double ionization, approximately 6 × 10¹⁴ W cm⁻². The results point to an explanation of the experimental results, but more detailed calculations on the behaviour of excited He+ ions are required.

Relevance:

30.00%

Publisher:

Abstract:

BACKGROUND: Patients with castration-resistant prostate cancer (CRPC) and bone metastases have an unmet clinical need for effective treatments that improve quality of life and survival with a favorable safety profile. OBJECTIVE: To prospectively evaluate the efficacy and safety of three different doses of radium-223 chloride (Ra 223) in patients with CRPC and bone metastases. DESIGN, SETTING, AND PARTICIPANTS: In this phase 2 double-blind multicenter study, 122 patients were randomized to receive three injections of Ra 223 at 6-wk intervals, at doses of 25 kBq/kg (n=41), 50 kBq/kg (n=39), or 80 kBq/kg (n=42). The study compared the proportion of patients in each dose group who had a confirmed decrease of ≥50% in baseline prostate-specific antigen (PSA) levels. OUTCOME MEASUREMENTS AND STATISTICAL ANALYSIS: Efficacy was evaluated using blood samples to measure PSA and other tumor markers, recorded skeletal-related events, and pain assessments. Safety was evaluated using adverse events (AEs), physical examination, and clinical laboratory tests. The Jonckheere-Terpstra test assessed trends between groups. RESULTS AND LIMITATIONS: The study met its primary end point with a statistically significant dose-response relationship in confirmed ≥50% PSA declines for no patients (0%) in the 25-kBq/kg dose group, two patients (6%) in the 50-kBq/kg dose group, and five patients (13%) in the 80-kBq/kg dose group (p=0.0297). A ≥50% decrease in bone alkaline phosphatase levels was identified in six patients (16%), 24 patients (67%), and 25 patients (66%) in the 25-, 50-, and 80-kBq/kg dose groups, respectively (p

Relevance:

30.00%

Publisher:

Abstract:

A 64-point Fourier transform chip is described that performs a forward or inverse 64-point Fourier transform on complex two's complement data supplied at a rate of 13.5 MHz, and can operate at clock rates of up to 40 MHz under worst-case conditions. It uses a 0.6 µm double-level metal CMOS technology, contains 535k transistors and uses an internal 3.3 V power supply. It has an area of 7.8 × 8 mm, dissipates 0.9 W, has 48 pins and is housed in an 84-pin PLCC plastic package. The chip is based on an FFT architecture developed from first principles through a detailed investigation of the structure of the relevant DFT matrix and through mapping repetitive blocks within this matrix onto a regular silicon structure.
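
As a purely illustrative software cross-check of the transform the chip computes (floating point, and unrelated to its fixed-point silicon implementation), the sketch below builds the 64-point DFT matrix explicitly and confirms that the direct matrix-vector product matches NumPy's FFT.

```python
# An illustrative cross-check: the 64-point DFT expressed as an explicit
# matrix-vector product, compared against NumPy's FFT of the same complex input.
import numpy as np

N = 64
n = np.arange(N)
W = np.exp(-2j * np.pi * np.outer(n, n) / N)    # DFT matrix: W[k, m] = exp(-2*pi*i*k*m / N)

rng = np.random.default_rng(1)
x = rng.standard_normal(N) + 1j * rng.standard_normal(N)

X_direct = W @ x                                 # direct O(N^2) evaluation
X_fft = np.fft.fft(x)                            # fast algorithm
print(np.allclose(X_direct, X_fft))              # True: same transform, different structure
```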

Relevance:

30.00%

Publisher:

Abstract:

A novel hardware architecture for elliptic curve cryptography (ECC) over GF(p) is introduced. It can perform the main prime field arithmetic functions needed in these cryptosystems, including modular inversion and multiplication. It is based on a new unified modular inversion algorithm that offers considerable improvement over previous ECC techniques that use Fermat's Little Theorem for this operation. The processor described uses a full-word multiplier which requires far fewer clock cycles than previous methods, while still maintaining a competitive critical path delay. The benefits of the approach have been demonstrated by utilizing these techniques to create a field-programmable gate array (FPGA) design, which can perform a 256-bit prime field scalar point multiplication in 3.86 ms, the fastest FPGA time reported to date. The ECC architecture described can also perform four different types of modular inversion, making it suitable for use in many different ECC applications. © 2006 IEEE.
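
For context on the operation the unified algorithm replaces, the sketch below shows the classical Fermat's-Little-Theorem route to modular inversion over GF(p), an exponentiation by p - 2; the paper's unified inversion algorithm is a different, hardware-oriented method not reproduced here, and the prime below is chosen only for illustration.

```python
# A sketch of the classical approach the text contrasts against: modular
# inversion over GF(p) by Fermat's little theorem, a^(p-2) mod p.
P = 2**255 - 19                    # example prime field (illustrative choice)

def inv_fermat(a, p=P):
    """Return a^(-1) mod p via a^(p-2) mod p (requires p prime and a not divisible by p)."""
    return pow(a, p - 2, p)

a = 123456789
assert (a * inv_fermat(a)) % P == 1
print(hex(inv_fermat(a)))
```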

Relevance:

30.00%

Publisher:

Abstract:

The speed of manufacturing processes today depends on a trade-off between the physical processes of production, the wider system that allows these processes to operate and the co-ordination of a supply chain in the pursuit of meeting customer needs. Could the speed of this activity be doubled? This paper explores this hypothetical question, starting with examination of a diverse set of case studies spanning the activities of manufacturing. This reveals that the constraints on increasing manufacturing speed have some common themes, and several of these are examined in more detail, to identify absolute limits to performance. The physical processes of production are constrained by factors such as machine stiffness, actuator acceleration, heat transfer and the delivery of fluids, and for each of these, a simplified model is used to analyse the gap between current and limiting performance. The wider systems of production require the co-ordination of resources and push at the limits of human biophysical and cognitive limits. Evidence about these is explored and related to current practice. Out of this discussion, five promising innovations are explored to show examples of how manufacturing speed is increasing—with line arrays of point actuators, parallel tools, tailored application of precision, hybridisation and task taxonomies. The paper addresses a broad question which could be pursued by a wider community and in greater depth, but even this first examination suggests the possibility of unanticipated innovations in current manufacturing practices.