833 results for compression therapies
Abstract:
Cell-based therapies have the potential to make a large contribution toward currently unmet patient needs, and thus effective manufacture of these products is essential. Many challenges must be overcome before this can become a reality, and a better definition of the manufacturing requirements for cell-based products must be obtained. The aim of this study is to inform industry and academia about current cell-based therapy clinical development and to identify gaps in manufacturing requirements. A total of 1342 active cell-based therapy clinical trials have been identified and characterized by cell type, target indication and trial phase. Multiple technologies have been assessed for the manufacture of these cell types in order to facilitate product translation and future process development.
Abstract:
A classification of the types of information redundancy in symbolic and graphical representations of information is presented, along with a general classification of compression technologies for graphical information. Design principles, tasks, and implementation variants for a semantic compression technology for graphical information are proposed.
Abstract:
This paper presents a novel error-free (infinite-precision) architecture for the fast implementation of the 8x8 2-D Discrete Cosine Transform. The architecture uses a new algebraic integer encoding of a 1-D radix-8 DCT that allows the separable computation of a 2-D 8x8 DCT without any intermediate number representation conversions. This is a considerable improvement on previously introduced algebraic integer encoding techniques for computing both the DCT and IDCT, as it eliminates the need to approximate the transformation matrix elements by obtaining their exact representations, and hence maps the transcendental functions without any errors. Beyond its multiplication-free nature, this new mapping scheme fits the algorithm well, eliminating any computational or quantization errors and resulting in a short-word-length, high-speed design.
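The row-column separability that this architecture exploits can be sketched in ordinary floating point; the paper's exact algebraic-integer encoding is not reproduced here, and `dct2_separable` is an illustrative name:

```python
import numpy as np

def dct_matrix(n=8):
    # Orthonormal DCT-II basis: row k holds the k-th cosine basis vector.
    C = np.zeros((n, n))
    for k in range(n):
        scale = np.sqrt(1.0 / n) if k == 0 else np.sqrt(2.0 / n)
        for i in range(n):
            C[k, i] = scale * np.cos((2 * i + 1) * k * np.pi / (2 * n))
    return C

def dct2_separable(X):
    # Separable 2-D DCT: a 1-D transform over rows, then over columns,
    # written compactly as the matrix product C @ X @ C.T.
    C = dct_matrix(X.shape[0])
    return C @ X @ C.T
```

Because the transform is separable, the 8x8 case costs two sets of 1-D transforms instead of a direct double sum; the algebraic-integer encoding additionally makes each 1-D stage exact and multiplication-free.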
Abstract:
A multicore fibre (MCF) sensor to measure the radial deformation of a compliant cylinder under compression is presented. The sensor is connectorised and need not be permanently bonded to the test object. A differential measurement technique using FBGs written into the MCF makes the sensor temperature insensitive. FBG measurement of axial strain of a cylinder under compression is also reported.
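The temperature-cancelling differential measurement can be sketched as follows; the coefficient values and function names are illustrative, not the paper's calibration:

```python
# Illustrative constants (not the paper's calibration values).
P_E = 0.22    # effective photo-elastic coefficient of silica fibre
K_T = 6.5e-6  # combined thermo-optic/expansion sensitivity per degree C

def bragg_shift(lam0, strain, dT):
    """Wavelength shift of one FBG: mixes axial strain and temperature."""
    return lam0 * ((1 - P_E) * strain + K_T * dT)

def differential_strain(shift_a, shift_b, lam0):
    """Strain difference between two cores; the common temperature term
    cancels in the subtraction, making the result temperature insensitive."""
    return (shift_a - shift_b) / (lam0 * (1 - P_E))
```

Because all cores of the MCF see the same temperature, the K_T·ΔT term is common-mode and drops out of the difference, leaving only the strain asymmetry produced by the radial deformation.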
Anisotropic characterization of crack growth in the tertiary flow of asphalt mixtures in compression
Abstract:
Asphalt mixtures exhibit primary, secondary, and tertiary stages in sequence during rutting deterioration. Many field asphalt pavements are still in service even when the asphalt layer is in the tertiary stage, and rehabilitation is not performed until a significant amount of rutting accompanied by numerous macrocracks is observed. The objective of this study was to provide a mechanistic method to model the anisotropic cracking of asphalt mixtures in compression during the tertiary stage of rutting. Laboratory tests, including nondestructive and destructive tests, were performed to obtain the viscoelastic and viscofracture properties of the asphalt mixtures. Each of the measured axial and radial total strains in the destructive tests was decomposed into elastic, plastic, viscoelastic, viscoplastic, and viscofracture strains using the pseudostrain method in an extended elastic-viscoelastic correspondence principle. The viscofracture strains are caused by crack growth, which is primarily signaled by the increase of the phase angle in the tertiary flow. The viscofracture properties are characterized using anisotropic damage densities (i.e., the ratio of the area lost to cracks to the original total area in orthogonal directions). Using the decomposed axial and radial viscofracture strains, the axial and radial damage densities were determined by a dissipated pseudostrain energy balance principle and a geometric analysis of the cracks, respectively. Anisotropic pseudo J-integral Paris' laws in terms of damage densities were used to characterize the evolution of the cracks in compression. The material constants in the Paris' law were determined and found to be highly correlated. These tests, analyses, and modeling were performed on different asphalt mixtures with two binders, two air void contents, and three aging periods.
Consistent results were obtained; for instance, a stiffer asphalt mixture is demonstrated to have a higher modulus, a lower phase angle, a greater flow number, and a larger n1 value (exponent of Paris' law). The calculation of the orientation of cracks demonstrates that the asphalt mixture with 4% air voids has a brittle fracture and a splitting crack mode, whereas the asphalt mixture with 7% air voids tends to have a ductile fracture and a diagonal sliding crack mode. Cracks of the asphalt mixtures in compression are inclined to propagate along the direction of the external compressive load. © 2014 American Society of Civil Engineers.
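As a rough illustration of how a damage-density Paris' law of this kind is used, the sketch below integrates dξ/dN = A·(ΔJ_R)^n1 over load cycles with forward Euler; A, n1 and the pseudo J-integral amplitude are made-up values, not the constants measured in these tests:

```python
def evolve_damage_density(A, n1, dJR, cycles, xi0=0.0):
    """Integrate d(xi)/dN = A * dJR**n1 over N load cycles.
    xi is the damage density (lost area / original area), capped at 1."""
    xi = xi0
    history = [xi]
    for _ in range(cycles):
        xi = min(xi + A * dJR ** n1, 1.0)  # damage density cannot exceed 1
        history.append(xi)
    return history
```

A larger exponent n1 (as reported for the stiffer mixture) makes the growth rate far more sensitive to the pseudo J-integral amplitude.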
Abstract:
The introduction of anti-vascular endothelial growth factor (anti-VEGF) agents has had a significant impact on reducing the visual loss due to neovascular age-related macular degeneration (n-AMD). There are significant inter-individual differences in response to an anti-VEGF agent, made more complex by the availability of multiple anti-VEGF agents with different molecular configurations. The response to anti-VEGF therapy has been found to depend on a variety of factors including the patient's age, lesion characteristics, lesion duration, baseline visual acuity (VA) and the presence of particular genotype risk alleles. Furthermore, a proportion of eyes with n-AMD show a decline in acuity or morphology despite therapy, or require very frequent re-treatment. There is currently no consensus on how to classify optimal response, or the lack of it, with these therapies. There is, in particular, confusion over terms such as 'responder status' after treatment for n-AMD, 'tachyphylaxis' and 'recalcitrant' n-AMD. This document aims to provide a consensus on the definition and categorisation of the response of n-AMD to anti-VEGF therapies and on the time points at which response to treatment should be determined. Primary response is best determined at 1 month following the last initiation dose, while maintained treatment (secondary) response is determined any time after the 4th visit. In a particular eye, secondary responses do not mirror, and cannot be predicted from, those in the primary phase. Morphological and functional responses to anti-VEGF treatments do not necessarily correlate, and may be dissociated in an individual eye. Furthermore, there is a ceiling effect that can negate currently used functional metrics such as a >5-letter improvement when the baseline VA is good (ETDRS >70 letters).
It is therefore important to use a combination of both parameters in determining the response. The following definitions are proposed: optimal (good) response is defined as resolution of fluid (intraretinal fluid, IRF; subretinal fluid, SRF; and retinal thickening) and/or an improvement of >5 letters, subject to the ceiling effect of a good starting VA. Poor response is defined as a <25% reduction from baseline in central retinal thickness (CRT), with persistent or new IRF or SRF, or minimal or no change in VA (that is, a change in VA of 0 to +4 letters). Non-response is defined as an increase in fluid (IRF, SRF and CRT), or increasing haemorrhage compared with baseline, and/or a loss of >5 letters compared with baseline or with the best corrected vision achieved subsequently. Poor or non-response to anti-VEGF may be due to clinical factors, including dosing lower than that required by a particular patient, increased dosing intervals, treatment initiation when disease is already at an advanced or chronic stage, cellular mechanisms, lesion type, genetic variation and potential tachyphylaxis; non-clinical factors, including poor access to clinics or delayed appointments, may also result in poor treatment outcomes. In eyes classified as good responders, treatment should be continued with the same agent when disease activity is present or reactivation occurs following temporary dose holding. In eyes that show a partial response, treatment may be continued, although re-evaluation with further imaging may be required to exclude confounding factors. Where there is persistent, unchanging accumulated fluid following three consecutive injections at monthly intervals, treatment may be withheld temporarily, but recommenced with the same or an alternative anti-VEGF agent if the fluid subsequently increases (the lesion being considered active).
Poor or non-response to anti-VEGF treatments requires re-evaluation of the diagnosis and, if necessary, a switch to alternative therapies, including other anti-VEGF agents and/or photodynamic therapy (PDT). Idiopathic polypoidal choroidopathy may require treatment with PDT monotherapy or in combination with anti-VEGF. A committee of retinal specialists with experience of managing patients with n-AMD, similar to that which developed the Royal College of Ophthalmologists Guidelines to Ranibizumab, was assembled. Individual aspects of the guidelines were proposed by the committee lead (WMA) based on relevant published evidence following a search of Medline, and circulated to all committee members for discussion before approval or modification. Each draft was modified according to feedback from committee members until unanimous approval was obtained in the final draft. A system for categorising the range of responsiveness of n-AMD lesions to anti-VEGF therapy is proposed. The proposal is based primarily on morphological criteria, but functional criteria have been included. Recommendations have been made on when to consider discontinuation of therapy, either because of success or futility. These guidelines should help clinical decision-making and may prevent over- and/or undertreatment with anti-VEGF therapy.
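The proposed categories can be summarised as a decision rule; the sketch below encodes only the headline thresholds (>5 letters, <25% CRT reduction, the ETDRS >70-letter ceiling) and deliberately omits the clinical judgement the guidelines call for:

```python
def classify_response(va_change, baseline_va, crt_reduction_pct,
                      fluid_resolved, fluid_increased, new_haemorrhage):
    """Return 'good', 'poor' or 'non-response'. Illustrative only."""
    # Non-response: increasing fluid or haemorrhage, or loss of >5 letters.
    if fluid_increased or new_haemorrhage or va_change < -5:
        return "non-response"
    # Ceiling effect: a good starting VA (ETDRS >70 letters) limits the
    # achievable letter gain, so the VA gain criterion is waived there.
    gain_criterion = va_change > 5 or baseline_va > 70
    # Good: fluid resolution and/or a >5-letter improvement.
    if fluid_resolved or gain_criterion:
        return "good"
    # Poor: <25% CRT reduction with persistent fluid, or a VA change of
    # only 0 to +4 letters.
    if crt_reduction_pct < 25 or 0 <= va_change <= 4:
        return "poor"
    return "poor"
```

In practice morphology and function are graded together, since the document stresses that the two responses may be dissociated in an individual eye.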
Abstract:
We numerically study the phenomenon of spectral compression taking place in an optical fibre with normal dispersion. The conditions leading to a nearly transform-limited pulse are determined, and we show that, far from degrading performance, the presence of normal dispersion enables a significant improvement of the results.
Abstract:
We present comprehensive design rules to optimize the process of spectral compression arising from nonlinear pulse propagation in an optical fiber. Extensive numerical simulations are used to predict the performance characteristics of the process as well as to identify the optimal operational conditions within the space of system parameters. It is shown that the group velocity dispersion of the fiber is not detrimental and, in fact, helps achieve optimum compression. We also demonstrate that near-transform-limited rectangular and parabolic pulses can be generated in the region of optimum compression.
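The basic mechanism, SPM-induced chirp opposing the input chirp so that the spectrum narrows, can be reproduced with a minimal single-step simulation that neglects dispersion entirely (the paper shows that normal GVD in fact further improves compression); all parameter values are illustrative:

```python
import numpy as np

def rms_spectral_width(A, t):
    # RMS width of the power spectrum |FFT(A)|^2, in FFT frequency units.
    spec = np.abs(np.fft.fftshift(np.fft.fft(A))) ** 2
    f = np.fft.fftshift(np.fft.fftfreq(t.size, t[1] - t[0]))
    mean = np.sum(f * spec) / np.sum(spec)
    return np.sqrt(np.sum((f - mean) ** 2 * spec) / np.sum(spec))

t = np.linspace(-20.0, 20.0, 4096)
C = 4.0        # initial linear chirp (illustrative)
gamma_z = 2.0  # accumulated nonlinear phase, chosen to oppose the chirp
A_in = np.exp(-t**2 / 2) * np.exp(1j * C * t**2 / 2)   # chirped Gaussian
A_out = A_in * np.exp(1j * gamma_z * np.abs(A_in)**2)  # pure SPM step
# With this choice the instantaneous frequency C*t is partially cancelled
# by the SPM chirp, so the output spectrum is narrower than the input one.
```

With dispersion neglected, propagation under pure SPM reduces to a single nonlinear phase multiplication, which is why one step suffices for the sketch.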
Abstract:
Aims: Our aim was to investigate the proportional representation of people of South Asian origin in cardiovascular outcome trials of glucose-lowering drugs or strategies in Type 2 diabetes, noting that these are among the most significant pieces of evidence used to formulate the guidelines on which clinical practice is largely based. Methods: We searched for cardiovascular outcome trials in Type 2 diabetes published before January 2015, and extracted data on the ethnicity of participants. These were compared against expected values for proportional representation of South Asian individuals, based on population data from the USA, from the UK, and globally. Results: Twelve studies met our inclusion criteria and, of these, eight presented a sufficiently detailed breakdown of participant ethnicity to permit numerical analysis. In general, people of South Asian origin were found to be under-represented in trials compared with UK and global expectations and over-represented compared with US expectations. Among the eight trials for which South Asian representation could be reliably estimated, seven under-represented this group relative to the 11.2% of the UK diabetes population estimated to be South Asian, with the representation in these trials ranging from 0.0% to 10.0%. Conclusions: Clinicians should exercise caution when generalizing the results of trials to their own practice, with regard to the ethnicity of individuals. Efforts should be made to improve reporting of ethnicity and improve diversity in trial recruitment, although we acknowledge that there are challenges that must be overcome to make this a reality.
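A simple way to express the comparison the authors make is the ratio of observed to expected representation; the 5.6% example figure below is hypothetical, while 11.2% is the UK estimate quoted in the abstract:

```python
def representation_ratio(observed_pct, expected_pct):
    """Observed share divided by expected share: 1.0 means proportional
    representation, below 1.0 means under-representation."""
    return observed_pct / expected_pct

# A hypothetical trial with 5.6% South Asian participants, against the
# 11.2% UK diabetes-population estimate, gives a ratio of 0.5, i.e. the
# group is represented at half its expected level.
ratio = representation_ratio(5.6, 11.2)
```

The same ratio computed against US or global expected values explains how a trial can be over-represented against one benchmark and under-represented against another.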
Abstract:
By exploiting an additional sinusoidal temporal phase modulation, we show that it is possible to significantly improve the performance of spectral compression carried out in a highly nonlinear propagation regime. Numerical simulations indicate an improvement in the compression factors as well as in the Strehl ratio.
Abstract:
The contributions of this dissertation are in the development of two new interrelated approaches to video data compression: (1) a level-refined motion estimation and subband compensation method for effective motion estimation and motion compensation; (2) a shift-invariant sub-decimation decomposition method that overcomes the deficiency of the decimation process in estimating motion, which stems from the shift-variant property of the wavelet transform.

The enormous data generated by digital videos call for efficient video compression techniques to conserve storage space and minimize bandwidth utilization. The main idea of video compression is to reduce the interpixel redundancies inside and between the video frames by applying motion estimation and motion compensation (MEMC) in combination with spatial transform coding. To locate the global minimum of the matching criterion function reliably, hierarchical motion estimation with coarse-to-fine resolution refinements using the discrete wavelet transform is applied, owing to its intrinsic multiresolution and scalability.

Because most of the energy is concentrated in the low-resolution subbands and decreases in the high-resolution subbands, a new approach called the level-refined motion estimation and subband compensation (LRSC) method is proposed. It exploits the possible intrablocks in the subbands for lower-entropy coding while keeping the computational load of motion estimation as low as in the level-refined method, thus achieving both temporal compression quality and computational simplicity.

Since circular convolution is applied in the wavelet transform to obtain the decomposed subframes without coefficient expansion, a symmetric-extended wavelet transform is designed for the finite-length frame signals to enable more accurate motion estimation without discontinuous boundary distortions.

Although wavelet-transformed coefficients still contain spatial-domain information, motion estimation in the wavelet domain is not as straightforward as in the spatial domain because of the shift variance of the decimation process of the wavelet transform. A new approach called the sub-decimation decomposition method is proposed, which maintains the motion consistency between the original frame and the decomposed subframes, thereby improving wavelet-domain video compression through shift-invariant motion estimation and compensation.
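For reference, the spatial-domain baseline that hierarchical and wavelet-domain schemes are measured against is exhaustive block matching with a sum-of-absolute-differences (SAD) criterion; this sketch is a generic baseline, not the dissertation's level-refined method:

```python
import numpy as np

def full_search_sad(ref, cur, block=8, search=4):
    """Exhaustive block-matching motion estimation with the SAD criterion.
    Returns one (dy, dx) motion vector per block of the current frame."""
    H, W = cur.shape
    vectors = {}
    for by in range(0, H - block + 1, block):
        for bx in range(0, W - block + 1, block):
            target = cur[by:by + block, bx:bx + block].astype(int)
            best, best_mv = None, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if y < 0 or x < 0 or y + block > H or x + block > W:
                        continue  # candidate falls outside the reference frame
                    cand = ref[y:y + block, x:x + block].astype(int)
                    sad = np.abs(cand - target).sum()
                    if best is None or sad < best:
                        best, best_mv = sad, (dy, dx)
            vectors[(by, bx)] = best_mv
    return vectors
```

The hierarchical (coarse-to-fine) schemes discussed above exist precisely to avoid this exhaustive search: a coarse-level estimate restricts the fine-level search window.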
Abstract:
The focus of this thesis is text data compression based on the fundamental coding scheme referred to as the American Standard Code for Information Interchange, or ASCII. The research objective is the development of software algorithms that result in significant compression of text data. Past and current compression techniques have been thoroughly reviewed to ensure proper contrast between the compression results of the proposed technique and those of existing ones. The research problem is based on the need to achieve higher compression of text files in order to save valuable memory space and increase the transmission rate of these files. It was deemed necessary that the compression algorithm to be developed be effective even for small files and be able to contend with uncommon words, as they are dynamically included in the dictionary once they are encountered. A critical design aspect of this compression technique is its compatibility with existing compression techniques. In other words, the developed algorithm can be used in conjunction with existing techniques to yield even higher compression ratios. This thesis demonstrates such capabilities and outcomes, and the research objective of achieving a higher compression ratio is attained.
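The thesis's exact algorithm is not given in the abstract, but the idea of a dictionary that grows dynamically as new sequences are encountered is the one behind LZW coding, sketched here over the 256 single-byte ASCII/Latin-1 codes:

```python
def lzw_compress(text):
    """Dictionary coder: starts from the 256 single-byte codes and adds
    each newly seen sequence, so repeated words soon emit a single code."""
    dictionary = {chr(i): i for i in range(256)}
    w, out = "", []
    for ch in text:
        wc = w + ch
        if wc in dictionary:
            w = wc                      # keep extending the current match
        else:
            out.append(dictionary[w])   # emit the longest known prefix
            dictionary[wc] = len(dictionary)  # learn the new sequence
            w = ch
    if w:
        out.append(dictionary[w])
    return out

def lzw_decompress(codes):
    """Inverse transform; rebuilds the same dictionary on the fly."""
    dictionary = {i: chr(i) for i in range(256)}
    w = chr(codes[0])
    out = [w]
    for k in codes[1:]:
        entry = dictionary[k] if k in dictionary else w + w[0]
        out.append(entry)
        dictionary[len(dictionary)] = w + entry[0]
        w = entry
    return "".join(out)
```

Because the dictionary is rebuilt identically by the decoder, no dictionary needs to be transmitted, which is what makes such schemes effective even for small files.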
Abstract:
Medical imaging technologies are experiencing growth in usage and image resolution, namely in diagnostic systems that require a large set of images, like CT or MRI. Furthermore, legal restrictions impose that these scans be archived for several years. These facts have led to increased storage costs in medical image databases and institutions. Thus, a demand for more efficient compression tools, used for archiving and communication, is arising. Currently, the DICOM standard, which makes recommendations for medical communications and image compression, recommends lossless encoders such as JPEG, RLE, JPEG-LS and JPEG2000. However, none of these encoders includes inter-slice prediction in its algorithm. This dissertation presents research on medical image compression using the MRP encoder, one of the most efficient lossless image compression algorithms. Several processing techniques are proposed to adapt the input medical images to the encoder's characteristics. Two of these techniques, namely changing the alignment of slices for compression and a pixel-wise difference predictor, increased the compression efficiency of MRP by up to 27.9%. Inter-slice prediction support was also added to MRP, using uni- and bi-directional techniques, together with the pixel-wise difference predictor. Overall, the compression efficiency of MRP was improved by 46.1%. These techniques allow for compression ratio savings of 57.1% compared with DICOM encoders, and 33.2% compared with HEVC RExt Random Access, making MRP the most efficient of the encoders under study.
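The inter-slice, pixel-wise difference idea can be sketched as a simple predictor that stores each slice as its difference from the previous slice (a generic illustration of the principle, not MRP's actual predictor):

```python
import numpy as np

def interslice_residuals(volume):
    """Keep slice 0 verbatim; store every later slice as its pixel-wise
    difference from the previous slice along the slice axis."""
    res = np.empty(volume.shape, dtype=np.int32)
    res[0] = volume[0]
    res[1:] = volume[1:].astype(np.int32) - volume[:-1].astype(np.int32)
    return res

def reconstruct(res):
    # Cumulative sum along the slice axis inverts the differencing exactly,
    # so the scheme is lossless.
    return np.cumsum(res, axis=0)
```

Adjacent slices of a CT or MRI volume are highly correlated, so the residuals cluster near zero and compress far better under an entropy coder than the raw slices do.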