869 results for Web Applications Engineering


Relevance:

30.00%

Publisher:

Abstract:

We introduce a new discrete polynomial transform constructed from the rows of Pascal's triangle. The forward and inverse transforms are computed the same way in both the one- and two-dimensional cases, and the transform matrix can be factored into binary matrices for efficient hardware implementation. We conclude by discussing applications of the transform in
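As a hedged illustration of the construction described above (the abstract does not reproduce the paper's exact definitions), the Python sketch below builds the lower-triangular Pascal matrix, verifies one known factorization into binary bidiagonal matrices, and applies the forward and inverse transforms. All names and the choice of factorization are illustrative, not the paper's.

```python
# Illustrative sketch of a Pascal-matrix transform; the paper's exact
# construction is not given in the abstract.
import numpy as np
from math import comb

def pascal_matrix(n):
    """Lower-triangular Pascal matrix: P[i, j] = C(i, j)."""
    return np.array([[comb(i, j) for j in range(n)] for i in range(n)])

def binary_factor(n, k):
    """Bidiagonal 0/1 matrix B_k: identity plus ones at (i, i-1) for i >= k."""
    B = np.eye(n, dtype=int)
    for i in range(k, n):
        B[i, i - 1] = 1
    return B

n = 8
P = pascal_matrix(n)

# P factors into binary bidiagonal matrices, P = B_{n-1} @ ... @ B_2 @ B_1,
# so the transform can be evaluated with additions alone -- the property
# that makes a hardware implementation cheap.
F = np.eye(n, dtype=int)
for k in range(1, n):
    F = binary_factor(n, k) @ F
assert np.array_equal(F, P)

x = np.arange(n)
y = P @ x                      # forward transform
# The inverse matrix is the Pascal matrix with alternating signs:
# P_inv[i, j] = (-1)**(i + j) * C(i, j), mirroring the claim that forward
# and inverse transforms are computed the same way.
P_inv = np.array([[(-1) ** (i + j) * comb(i, j) for j in range(n)]
                  for i in range(n)])
assert np.array_equal(P_inv @ y, x)
```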

Relevance:

30.00%

Publisher:

Abstract:

The performance of parallel vector implementations of one- and two-dimensional orthogonal transforms is evaluated. The orthogonal transforms are computed using actual or modified fast Fourier transform (FFT) kernels. The factors considered in comparing the speed-up of these vectorized digital signal processing algorithms are discussed, and it is shown that the traditional way of comparing the execution speed of digital signal processing algorithms by the ratios of the numbers of multiplications and additions is no longer effective for vector implementations; the structure of the algorithm must also be considered. Simulation results on the Cray X-MP are presented for the following orthogonal transforms: discrete Fourier transform (DFT), discrete cosine transform (DCT), discrete sine transform (DST), discrete Hartley transform (DHT), discrete Walsh transform (DWHT), and discrete Hadamard transform (DHDT). A comparison between the DHT and the fast Hartley transform is also included.
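To make the "actual or modified FFT kernel" strategy concrete, here is a minimal NumPy sketch (not the paper's Cray vector code) computing one of the listed transforms, the DHT, from a standard FFT. For real input the identity H[k] = Re(X[k]) - Im(X[k]) holds, where X is the DFT.

```python
# Illustrative only: DHT computed from an FFT kernel versus the direct
# O(N^2) definition.
import numpy as np

def dht_via_fft(x):
    """DHT via the identity H[k] = Re(X[k]) - Im(X[k]), X = DFT(x)."""
    X = np.fft.fft(x)
    return X.real - X.imag

def dht_direct(x):
    """Direct definition: H[k] = sum_n x[n] * cas(2*pi*n*k/N)."""
    N = len(x)
    n = np.arange(N)
    arg = 2 * np.pi * np.outer(n, n) / N
    cas = np.cos(arg) + np.sin(arg)   # cas(t) = cos(t) + sin(t)
    return cas @ x

x = np.random.default_rng(0).standard_normal(64)
assert np.allclose(dht_via_fft(x), dht_direct(x))
```

Both routines perform similar arithmetic per output, which is exactly why the paper argues that operation counts alone cannot predict vectorized execution speed; the memory-access structure differs substantially.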

Relevance:

30.00%

Publisher:

Abstract:

Protein scaffolds that support molecular recognition have multiple applications in biotechnology. Thus, protein frames with robust structural cores but adaptable surface loops are in continued demand. Recently, notable progress has been made in the characterization of Ig domains of intracellular origin, in particular the modular components of the titin myofilament. These Ig domains belong to the I (intermediate) type, are remarkably stable, highly soluble, and undemanding to produce in the cytoplasm of Escherichia coli. Using the Z1 domain from titin as a representative, we show that the I-Ig fold tolerates drastic diversification of its CD loop, constituting an effective peptide display system. We examine the stability of CD-loop-grafted Z1-peptide chimeras using differential scanning fluorimetry, Fourier transform infrared spectroscopy, and nuclear magnetic resonance, and demonstrate that the introduction of bioreactive affinity binders at this position does not compromise the structural integrity of the domain. Further, the binding efficiency of the exogenous peptide sequences in Z1 is analyzed using pull-down assays and isothermal titration calorimetry. We show that an internally grafted FLAG affinity tag is functional within the context of the fold, interacting with the anti-FLAG M2 antibody both in solution and in affinity gel. Together, these data reveal the potential of the intracellular Ig scaffold for targeted functionalization.

Relevance:

30.00%

Publisher:

Abstract:

Oncological liver surgery and interventions aim to remove tumor tissue while preserving a sufficient amount of functional tissue to ensure organ regeneration. This requires a detailed understanding of the patient-specific internal organ anatomy (blood vessel system, bile ducts, tumor location). Introducing computer support into the surgical process enhances anatomical orientation through patient-specific 3D visualization and enables precise reproduction of planned surgical strategies through stereotactic navigation technology. This article provides clinical background on indications and techniques for the treatment of liver tumors, reviews the technological contributions addressing the problem of organ motion during navigated surgery on a deforming organ, and presents an overview of the clinical experience in computer-assisted liver surgery and interventions. The review concludes that several clinically applicable solutions for computer-assisted liver surgery are available and that small-scale clinical trials have been performed. Further developments will be required for more accurate and faster handling of organ deformation, and large clinical studies will be needed to demonstrate the benefits of computer-assisted liver surgery.

Relevance:

30.00%

Publisher:

Abstract:

Biodegradable polymer nanoparticles have the properties needed to address many of the issues associated with current drug delivery techniques, including targeted and controlled delivery. A novel drug delivery vehicle is proposed, consisting of a poly(lactic acid) (PLA) nanoparticle core with a functionalized mesoporous silica shell. In this study, the production of PLA nanoparticles by solvent displacement is investigated in both batch and continuous operation, and the effects of various system parameters are examined. Using Pluronic F-127 as the stabilizing agent throughout the study, PLA nanoparticles with diameters of 200-250 nm are produced by two different methods: dropwise addition and an impinging jet mixer. The impinging jet mixer allows easy scale-up of particle production. The surfactant concentration and the volume of quench solution are found to have minimal impact on particle diameter; the PLA concentration, however, significantly affects the mean diameter and polydispersity. In addition, the stability of the PLA nanoparticles is observed to increase as residual THF is evaporated. Lastly, the isolated PLA nanoparticles are coated with a silica shell using the Stöber process. Functionalizing the silica with a phosphonic silane in the presence of excess Pluronic F-127 is found to reduce coalescence of the particles during the coating process. Future work should fine-tune the PLA nanoparticle synthesis by examining the effects of other system parameters and by synthesizing mesoporous silica shells.

Relevance:

30.00%

Publisher:

Abstract:

Model-based calibration of steady-state engine operation is commonly performed with highly parameterized empirical models that are accurate but not very robust, particularly when predicting highly nonlinear responses such as diesel smoke emissions. To address this problem, and to boost the accuracy of more robust non-parametric methods to the same level, GT-Power was used to transform the empirical model input space into multiple input spaces that simplified the input-output relationship and improved the accuracy and robustness of smoke predictions made by three commonly used empirical modeling methods: multivariate regression, neural networks, and the k-nearest neighbor (k-NN) method. The availability of multiple input spaces allowed the development of two committee techniques: a "Simple Committee" technique that averaged predictions from a set of 10 input spaces pre-selected using the training data, and a "Minimum Variance Committee" technique in which the input spaces for each prediction were chosen on the basis of disagreement between the three modeling methods. The latter technique equalized the performance of the three modeling methods. The successively increasing improvements from using a single best transformed input space ("Best Combination" technique), the Simple Committee technique, and the Minimum Variance Committee technique were verified with hypothesis testing. The transformed input spaces were also shown to improve outlier detection and to improve k-NN performance when predicting dynamic emissions from steady-state training data. An unexpected finding was that the benefits of input space transformation were unaffected by changes in the hardware or in the calibration of the underlying GT-Power model.
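A minimal sketch of the "Simple Committee" idea follows, assuming scikit-learn and stand-in input-space transformations. The paper derives its transformed spaces from a GT-Power engine model; the data, transforms, and function names below are hypothetical.

```python
# Hypothetical sketch: average k-NN predictions across several transformed
# input spaces, the core of a simple committee predictor.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

def simple_committee_predict(transforms, X_train, y_train, X_test, k=5):
    """Average predictions from one k-NN model per transformed input space."""
    preds = []
    for t in transforms:                      # t maps raw inputs to a new space
        model = KNeighborsRegressor(n_neighbors=k)
        model.fit(t(X_train), y_train)
        preds.append(model.predict(t(X_test)))
    return np.mean(preds, axis=0)             # committee output = plain average

# Toy usage with illustrative transforms (identity, log, square):
rng = np.random.default_rng(0)
X_tr, X_te = rng.uniform(1, 10, (200, 4)), rng.uniform(1, 10, (20, 4))
y_tr = np.log(X_tr).sum(axis=1) + 0.05 * rng.standard_normal(200)
transforms = [lambda X: X, np.log, lambda X: X ** 2]
y_hat = simple_committee_predict(transforms, X_tr, y_tr, X_te)
```

The "Minimum Variance Committee" variant would instead score each candidate input space by the disagreement among the three model types and keep only the most consistent spaces for a given prediction.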

Relevance:

30.00%

Publisher:

Abstract:

The article summarizes the collective views expressed at the fourth session of the workshop Tissue Engineering: The Next Generation, which was devoted to translating the results of tissue engineering research into applications. Ernst Hunziker described the paradigm of a dual translational approach and argued that tissue engineering should be guided by the dimensions and physiological setting of the bodily compartment to be repaired. Myron Spector discussed collagen-glycosaminoglycan (GAG) scaffolds for musculoskeletal tissue engineering. Jeanette Libera focused on the biological and clinical aspects of cartilage tissue engineering and described a completely autologous procedure for engineering cartilage using the patient's own chondrocytes and blood serum. Arthur Gertzman reviewed the applications of allograft tissues in orthopedic surgery and outlined their potential as models for biological and medical studies. Savio Woo discussed functional tissue engineering approaches designed to restore the biochemical and biomechanical properties of injured ligaments and tendons closer to those of normal tissues, showing specific examples of biological scaffolds that combine chemoattractants and growth factors with contact-guidance properties to improve healing. Anthony Ratcliffe discussed translating research results into products that are profitable and meet regulatory requirements. Michael Lysaght challenged the proposition that the commercial and clinical failures of early tissue engineering products demonstrate a need for more focus on basic research. Arthur Coury described the evolution of tissue engineering products using the example of Genzyme, and discussed how differing definitions of success and failure can affect perceptions and policies concerning the status and advancement of the field.

Relevance:

30.00%

Publisher:

Abstract:

Following the workshops on dynamic languages held at the previous two ECOOP conferences, the Dyla 2007 workshop was a successful and popular event. As its name implies, the workshop's focus was on dynamic languages and their applications. Topics and discussions included macro expansion mechanisms, extension of the method lookup algorithm, language interpretation, reflexivity, and languages for mobile ad hoc networks. The main goal of the workshop was to bring together the different dynamic language communities and to foster interaction between them. Dyla 2007 was organised as a full-day meeting, devoted partly to presentations of submitted position papers and partly to tool demonstrations. All accepted papers can be downloaded from the workshop's web site. In this report, we provide an overview of the presentations and a summary of the discussions.

Relevance:

30.00%

Publisher:

Abstract:

Geospatial information systems are used to analyze spatial data and provide decision makers with relevant, up-to-date information. The processing time required to produce this information is a critical component of response time. Despite advances in algorithms and processing power, many “human-in-the-loop” factors remain. Given the limited number of geospatial professionals, it is important that analysts use their time effectively. Automating common tasks and speeding up human-computer interaction without disrupting workflow or attention is therefore highly desirable. The following research describes a novel approach to increasing productivity with a wireless, wearable electroencephalograph (EEG) headset within the geospatial workflow.

Relevance:

30.00%

Publisher:

Abstract:

Thermally conductive resins are a class of materials that show promise in many different applications. One growing field for their use is bipolar plate technology for fuel cells. In this work, a liquid crystal polymer (LCP) was mixed with different types of carbon fillers to determine the effect of each filler on the thermal conductivity of the composite resin. In addition, mathematical modeling was performed on the thermal conductivity data with the goal of developing predictive models for the thermal conductivity of highly filled composite resins.
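The abstract does not name the predictive models used, so as a hedged illustration the sketch below implements the classical Maxwell model for spherical fillers, one common starting point for thermal conductivity prediction in filled composites. The material values are hypothetical.

```python
# Illustrative stand-in only: Maxwell model for the effective thermal
# conductivity of a dilute dispersion of spherical filler particles.

def maxwell_conductivity(k_matrix, k_filler, phi):
    """Effective conductivity of a composite (Maxwell model).

    k_matrix, k_filler : thermal conductivities, W/(m*K)
    phi                : filler volume fraction (0..1)
    """
    num = k_filler + 2 * k_matrix + 2 * phi * (k_filler - k_matrix)
    den = k_filler + 2 * k_matrix - phi * (k_filler - k_matrix)
    return k_matrix * num / den

# Hypothetical numbers: an LCP matrix (~0.2 W/(m*K)) with 20 vol% of a
# carbon filler (~100 W/(m*K)).
print(maxwell_conductivity(0.2, 100.0, 0.20))
```

At high filler loadings such dilute-suspension models underpredict conductivity, which is why highly filled resins typically require fitted models of the kind the paper develops.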

Relevance:

30.00%

Publisher:

Abstract:

Routine bridge inspections require labor-intensive and highly subjective visual interpretation to determine bridge deck surface condition. Light Detection and Ranging (LiDAR), a relatively new class of survey instrument, has become a popular and increasingly used technology for providing as-built and inventory data in civil applications. While an increasing number of private and governmental agencies possess terrestrial and mobile LiDAR systems, an understanding of the technology’s capabilities and potential applications continues to evolve. LiDAR is a line-of-sight instrument, so care must be taken when establishing scan locations and resolution to capture data at a resolution adequate for defining the features that contribute to the analysis of bridge deck surface condition. Information such as the location, area, and volume of spalling on deck surfaces, undersides, and support columns can be derived from properly collected LiDAR point clouds. These point clouds contain quantitative surface condition information, enabling more accurate structural health monitoring. LiDAR scans were collected at three study bridges, each displaying a different degree of degradation. A variety of commercially available analysis tools and an independently developed algorithm written in ArcGIS Python (ArcPy) were used to locate and quantify surface defects, including the location, volume, and area of spalls. The results were displayed visually and numerically in a user-friendly web-based decision support tool that integrates prior bridge condition metrics for comparison. LiDAR data processing procedures are discussed, along with the strengths and limitations of point clouds for defining features useful in assessing bridge deck condition. Point cloud density and incidence angle are two attributes that must be managed carefully to ensure the collected data are of high quality and useful for bridge condition evaluation. When collected properly, LiDAR data can be analyzed to provide a useful data set from which to derive bridge deck condition information.
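The paper's ArcPy algorithm is not reproduced in the abstract; as a hedged sketch of one plausible spall-detection step, the NumPy code below fits a reference plane to a deck point cloud and flags points that fall below it by more than a tolerance. All parameters and the toy data are illustrative.

```python
# Hypothetical sketch: flag spall-candidate points as deviations below a
# least-squares reference plane fitted to the deck surface.
import numpy as np

def detect_spall_points(points, tol=0.01):
    """points: (N, 3) array of x, y, z in metres; tol: depth threshold (m).

    Returns a boolean mask of spall-candidate points and per-point depths.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    # Fit the plane z = a*x + b*y + c by least squares.
    A = np.column_stack([x, y, np.ones_like(x)])
    (a, b, c), *_ = np.linalg.lstsq(A, z, rcond=None)
    depth = (a * x + b * y + c) - z          # positive where the surface dips
    return depth > tol, depth

# Toy usage: flat 1 m x 1 m deck patch with a 2 cm-deep artificial spall.
rng = np.random.default_rng(1)
pts = rng.uniform(0, 1, (5000, 2))
z = 0.002 * rng.standard_normal(5000)        # 2 mm sensor noise
spall = (pts[:, 0] - 0.5) ** 2 + (pts[:, 1] - 0.5) ** 2 < 0.01
z[spall] -= 0.02
mask, depth = detect_spall_points(np.column_stack([pts, z]), tol=0.01)
area = mask.mean() * 1.0                     # flagged fraction of the 1 m^2 patch
```

The tolerance must sit above the sensor noise floor, which is one reason point cloud density and incidence angle matter for usable condition data.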

Relevance:

30.00%

Publisher:

Abstract:

The demands on production, and the associated costs, of power generation from non-renewable resources are increasing at an alarming rate. Solar energy is one renewable resource with the potential to offset this increase. To date, the utilization of solar energy has concentrated mainly on heating applications. Using solar energy to cool buildings would contribute greatly to the goal of minimizing non-renewable energy use. The solar heating system research conducted by institutions such as the University of Wisconsin-Madison, and the building heat flow modeling conducted at Oklahoma State University, can be used to develop and optimize solar cooling systems for buildings. This research uses these two approaches to develop Graphical User Interface (GUI) software for an integrated solar absorption cooling building model, capable of simulating and optimizing an absorption cooling system that uses solar energy as the main energy source driving the cycle. The software was then put through a series of verification tests against building cooling system data sets from similar applications around the world; the outputs obtained were consistent with the established experimental results from those data sets. Software developed by other research groups caters to advanced users; the software developed in this research is not only reliable in its code integrity but, through its integrated approach, is also accessible to new users. Hence, this dissertation aims to correctly model a complete building with an absorption cooling system in an appropriate climate as a cost-effective alternative to a conventional vapor compression system.
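As a hedged illustration of the kind of steady-state calculation such a simulation performs (this is not the dissertation's code), the sketch below computes the ideal COP of a three-temperature absorption cycle driven by solar heat. The operating temperatures are hypothetical.

```python
# Illustrative only: Carnot-limit COP of an absorption chiller, modeled as a
# heat engine between generator and ambient driving a refrigerator between
# evaporator and ambient.

def ideal_absorption_cop(t_gen_c, t_amb_c, t_evap_c):
    """Ideal COP for a three-temperature absorption cycle.

    t_gen_c  : generator (solar-heated) temperature, deg C
    t_amb_c  : condenser/absorber rejection temperature, deg C
    t_evap_c : evaporator (chilled-water) temperature, deg C
    """
    t_g, t_a, t_e = (t + 273.15 for t in (t_gen_c, t_amb_c, t_evap_c))
    return (1 - t_a / t_g) * t_e / (t_a - t_e)

# Hypothetical single-effect LiBr-water operating point:
print(ideal_absorption_cop(t_gen_c=90.0, t_amb_c=35.0, t_evap_c=7.0))
# Real single-effect machines reach roughly COP 0.6-0.8, well below this limit.
```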