983 results for Freezing and processing


Relevance: 100.00%

Abstract:

Background: Sweet cherries (Prunus avium L.) are a nutritious fruit, rich in polyphenols and with high antioxidant potential. Most sweet cherries are consumed fresh, and only a small proportion of total sweet cherry production is value-added into processed food products. Sweet cherries are a highly perishable fruit with a short harvest season, so extensive preservation and processing methods have been developed to extend their shelf-life and the distribution of their products. Scope and Approach: This review describes the main physicochemical properties of sweet cherries, their bioactive components and the methods used to determine them. It emphasises recent progress in postharvest technology, such as controlled/modified atmosphere storage, edible coatings, irradiation and biological control agents, for maintaining sweet cherries for the fresh market. Valorisation of second-grade sweet cherries and trends in the diversification of cherry products for future studies are also discussed. Key Findings and Conclusions: Sweet cherry fruit have a short harvest period and marketing window. The major losses in quality after harvest include moisture loss, softening, decay and stem browning. Without compromising eating quality, fruit quality and shelf-life can be extended by combining good handling practice with appropriate postharvest technology. With the growth of the health-food sector, there is strong potential for using second-class cherries, including cherry stems, as a source of bioactive compounds, as cherry fruit are well known for being rich in health-promoting components.

Relevance: 100.00%

Abstract:

The handling and processing of fish in Uganda has until recently been carried out exclusively by artisanal fishermen and fish processors. Their operations have left much to be desired, as the product is often of low quality and its keeping time is limited. Fresh fish has been handled without refrigeration, but with the recent establishment of commercial fish processing plants a cold chain of fish distribution is being set up for domestic and export markets. Some fishermen are beginning to ice their catch immediately on reaching the shore. It is hoped that fishmongers will increasingly find it more profitable to market their products iced. This will make fish available to a larger sector of the population and, in the process, reduce post-harvest losses.

Relevance: 100.00%

Abstract:

Common computational principles underlie the processing of various visual features in the cortex. They are thought to create similar patterns of contextual modulation in behavioral studies of different features, such as orientation and direction of motion. Here, I examined whether a single theoretical framework of circular feature coding and processing, implemented in different visual areas, could explain these similarities. Stimuli were created that allowed direct comparison of contextual effects on orientation and motion direction with two different psychophysical probes: changes in weak-signal and strong-signal perception. A single simplified theoretical model of circular feature coding, including only inhibitory interactions and decoding through a standard vector average, successfully predicted the similarities between the two domains, while differences in the feature population characteristics explained the differences in modulation for both experimental probes. These results demonstrate how a single computational principle can underlie the processing of various features across the cortices.
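As an illustration of the decoding step mentioned above, the sketch below implements standard population vector-average decoding for a circular feature such as motion direction or orientation. The cosine tuning curve, unit count and stimulus value are illustrative assumptions, not parameters taken from the study.

```python
import numpy as np

def vector_average_decode(preferred_angles, responses, period=2 * np.pi):
    """Decode a circular feature from a population of tuned units.

    preferred_angles : preferred feature values of the units (radians)
    responses        : activations of those units
    period           : 2*pi for motion direction, pi for orientation
    """
    # Map each preferred value onto the unit circle (angles are rescaled so
    # that values one period apart are treated as the same feature).
    phase = 2 * np.pi * np.asarray(preferred_angles) / period
    z = np.sum(np.asarray(responses) * np.exp(1j * phase))
    # The argument of the summed vector gives the decoded feature value.
    return (np.angle(z) * period / (2 * np.pi)) % period

# Example (assumed): 16 direction-tuned units with broad cosine tuning
prefs = np.deg2rad(np.arange(0, 360, 22.5))
stim = np.deg2rad(40.0)
rates = np.exp(np.cos(prefs - stim))
print(np.rad2deg(vector_average_decode(prefs, rates)))  # ~40 degrees
```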

Relevance: 100.00%

Abstract:

While a bathymetric echosounder is the essential device for carrying out hydrographic surveys, other external sensors are also indispensable (positioning system, motion unit, sound velocity profiler). Because sound does not travel in a straight line across the whole bathymetric swath, its measurement and processing are highly sensitive to conditions throughout the water column. DORIS provides an operational answer for sound velocity profile processing.
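As a rough illustration of what sound velocity profile processing involves (this is a generic sketch, not the DORIS algorithm), the example below reduces a layered velocity profile to an effective harmonic-mean sound speed and applies it to a one-way travel time. The profile values, travel time and function names are assumptions.

```python
import numpy as np

def harmonic_mean_sound_speed(depths, speeds):
    """Effective sound speed for a vertical path through a layered profile.

    depths : layer boundary depths in metres (increasing)
    speeds : sound speed in m/s at each boundary
    Each layer is weighted by the travel time spent in it.
    """
    depths = np.asarray(depths, dtype=float)
    speeds = np.asarray(speeds, dtype=float)
    thickness = np.diff(depths)
    layer_speed = 0.5 * (speeds[:-1] + speeds[1:])   # mean speed per layer
    travel_time = np.sum(thickness / layer_speed)    # one-way travel time
    return (depths[-1] - depths[0]) / travel_time

# Example profile (assumed): warm surface layer over cooler water
profile_depth = [0, 10, 50, 200]          # m
profile_speed = [1520, 1515, 1495, 1485]  # m/s
c_eff = harmonic_mean_sound_speed(profile_depth, profile_speed)
one_way_time = 0.135                      # s, measured travel time (assumed)
print(c_eff, c_eff * one_way_time)        # effective speed and estimated depth
```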

Relevance: 100.00%

Abstract:

Physiological signals, which are controlled by the autonomic nervous system (ANS), could be used to detect the affective state of computer users and therefore find applications in medicine and engineering. The Pupil Diameter (PD) seems to provide a strong indication of the affective state, as found by previous research, but it has not yet been fully investigated. In this study, new approaches based on monitoring and processing the PD signal for off-line and on-line affective assessment (“relaxation” vs. “stress”) are proposed. Wavelet denoising and Kalman filtering methods are first used to remove abrupt changes in the raw PD signal. Three features (PDmean, PDmax and PDWalsh) are then extracted from the preprocessed PD signal for affective state classification. In order to select more relevant and reliable physiological data for further analysis, two types of data selection methods are applied, based on the paired t-test and subject self-evaluation, respectively. In addition, five different classifiers are implemented on the selected data, achieving average accuracies of up to 86.43% and 87.20% for the two data subsets, respectively. Finally, the receiver operating characteristic (ROC) curve is utilized to investigate the discriminating potential of each individual feature by evaluating the area under the ROC curve, which reaches values above 0.90. For the on-line affective assessment, a hard threshold is first implemented to remove eye blinks from the PD signal, and a moving average window is then utilized to obtain the representative value PDr for every one-second interval of PD. The on-line affective assessment algorithm has three main steps: preparation, feature-based decision voting and affective determination. The final results show accuracies of 72.30% and 73.55% for the data subsets chosen with the two data selection methods (paired t-test and subject self-evaluation, respectively). In order to further analyze the efficiency of affective recognition through the PD signal, the Galvanic Skin Response (GSR) was also monitored and processed. The highest affective assessment classification rate obtained from GSR processing is only 63.57% (based on the off-line processing algorithm). The overall results confirm that the PD signal should be considered one of the most powerful physiological signals for future automated real-time affective recognition systems, especially for detecting the “relaxation” vs. “stress” states.
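To make the on-line preprocessing step concrete, the sketch below applies a hard blink-removal threshold and a one-second averaging window to obtain a representative value PDr per second. The threshold value, sampling rate and function names are assumptions; the wavelet denoising and Kalman filtering used in the off-line pipeline are omitted here.

```python
import numpy as np

def pd_representative_values(pd_signal, fs, blink_threshold=2.0, window_s=1.0):
    """Per-second representative pupil diameter (PDr) from a raw PD trace.

    pd_signal       : raw pupil diameter samples (mm)
    fs              : sampling rate in Hz
    blink_threshold : samples below this are treated as blinks (assumed value)
    window_s        : averaging window length in seconds
    """
    pd = np.asarray(pd_signal, dtype=float)
    # Hard threshold: blink samples are dropped from the average
    pd_clean = np.where(pd >= blink_threshold, pd, np.nan)

    window = int(round(fs * window_s))
    n_windows = len(pd_clean) // window
    pdr = []
    for k in range(n_windows):
        chunk = pd_clean[k * window:(k + 1) * window]
        pdr.append(np.nanmean(chunk))  # average over the window, ignoring blinks
    return np.array(pdr)

# Example: 10 s of a 60 Hz pupil trace with two simulated blink artefacts
rng = np.random.default_rng(0)
trace = 4.0 + 0.1 * rng.standard_normal(600)
trace[120:130] = 0.0
trace[400:408] = 0.0
print(pd_representative_values(trace, fs=60))
```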

Relevance: 90.00%

Abstract:

Various piezoelectric polymers based on polyvinylidene fluoride (PVDF) are of interest for large-aperture space-based telescopes. Dimensional adjustment of adaptive polymer films depends on charge deposition and requires a detailed understanding of the piezoelectric material response, which is expected to deteriorate owing to strong vacuum UV, γ-ray, X-ray, energetic particle and atomic oxygen exposure. We have investigated the degradation of PVDF and its copolymers under various stress environments detrimental to reliable operation in space. Initial radiation aging studies have shown complex material changes with lowered Curie temperatures and melting points, morphological transformations and significant crosslinking, but little influence on the piezoelectric d33 constants. Complex aging processes have also been observed in accelerated temperature environments, inducing annealing phenomena and cyclic stresses. The results suggest that poling and chain orientation are negatively affected by radiation and temperature exposure. A framework for dealing with these complex material qualification issues and overall system survivability predictions in low Earth orbit conditions has been established. It allows for improved material selection, feedback for manufacturing and processing, and material optimization/stabilization strategies, and provides guidance on alternative materials.

Relevance: 90.00%

Abstract:

To date, studies have focused on the acquisition of alphabetic second languages (L2s) in alphabetic first language (L1) users, demonstrating significant transfer effects. The present study examined the process from a reverse perspective, comparing logographic (Mandarin-Chinese) and alphabetic (English) L1 users in the acquisition of an artificial logographic script, in order to determine whether similar language-specific advantageous transfer effects occurred. English monolinguals, English-French bilinguals and Chinese-English bilinguals learned a small set of symbols in an artificial logographic script and were subsequently tested on their ability to process this script in regard to three main perspectives: L2 reading, L2 working memory (WM), and inner processing strategies. In terms of L2 reading, a lexical decision task on the artificial symbols revealed markedly faster response times in the Chinese-English bilinguals, indicating a logographic transfer effect suggestive of a visual processing advantage. A syntactic decision task evaluated the degree to which the new language was mastered beyond the single word level. No L1-specific transfer effects were found for artificial language strings. In order to investigate visual processing of the artificial logographs further, a series of WM experiments were conducted. Artificial logographs were recalled under concurrent auditory and visuo-spatial suppression conditions to disrupt phonological and visual processing, respectively. No L1-specific transfer effects were found, indicating no visual processing advantage of the Chinese-English bilinguals. However, a bilingual processing advantage was found indicative of a superior ability to control executive functions. In terms of L1 WM, the Chinese-English bilinguals outperformed the alphabetic L1 users when processing L1 words, indicating a language experience-specific advantage. Questionnaire data on the cognitive strategies that were deployed during the acquisition and processing of the artificial logographic script revealed that the Chinese-English bilinguals rated their inner speech as lower than the alphabetic L1 users, suggesting that they were transferring their phonological processing skill set to the acquisition and use of an artificial script. Overall, evidence was found to indicate that language learners transfer specific L1 orthographic processing skills to L2 logographic processing. Additionally, evidence was also found indicating that a bilingual history enhances cognitive performance in L2.

Relevance: 90.00%

Abstract:

Acoustic emission (AE) is the phenomenon in which high-frequency stress waves are generated by the rapid release of energy within a material from sources such as crack initiation or growth. The AE technique involves recording these stress waves by means of sensors placed on the surface and subsequently analysing the recorded signals to gather information such as the nature and location of the source. AE is one of several non-destructive testing (NDT) techniques currently used for structural health monitoring (SHM) of civil, mechanical and aerospace structures. Its advantages include the ability to provide continuous in-situ monitoring and high sensitivity to crack activity. Despite these advantages, several challenges remain in the successful application of AE monitoring. Accurate localisation of AE sources, discrimination between genuine AE sources and spurious noise sources, and damage quantification for severity assessment are some of the important issues in AE testing and will be discussed in this paper. Various data analysis and processing approaches will be applied to address these issues.
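As a minimal illustration of AE source localisation, the sketch below implements the classic linear (one-dimensional) location formula based on the arrival-time difference between two surface sensors. The sensor layout, wave speed and function name are assumptions for illustration, not the specific approach applied in the paper.

```python
def locate_ae_source_1d(x1, x2, t1, t2, wave_speed):
    """Linear AE source location from two sensors on the same axis.

    x1, x2     : sensor positions along the structure (m), x1 < x2
    t1, t2     : arrival times of the AE burst at each sensor (s)
    wave_speed : assumed propagation speed of the stress wave (m/s)

    For a source between the sensors, the arrival-time difference
    dt = t1 - t2 gives  x = midpoint + wave_speed * dt / 2.
    """
    dt = t1 - t2
    midpoint = 0.5 * (x1 + x2)
    return midpoint + 0.5 * wave_speed * dt

# Example (assumed): sensors 1.0 m apart, wave speed ~5000 m/s in steel
x_source = locate_ae_source_1d(x1=0.0, x2=1.0,
                               t1=8.0e-5, t2=12.0e-5, wave_speed=5000.0)
print(x_source)  # source estimated at 0.4 m from the first sensor
```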

Relevance: 90.00%

Abstract:

Signal Processing (SP) is a subject of central importance in engineering and the applied sciences. Signals are information-bearing functions, and SP deals with the analysis and processing of signals (by dedicated systems) to extract or modify information. Signal processing is necessary because signals normally contain information that is not readily usable or understandable, or which might be disturbed by unwanted sources such as noise. Although many signals are non-electrical, it is common to convert them into electrical signals for processing. Most natural signals (such as acoustic and biomedical signals) are continuous functions of time and are referred to as analog signals. Prior to the onset of digital computers, Analog Signal Processing (ASP) and analog systems were the only tools for dealing with analog signals. Although ASP and analog systems are still widely used, Digital Signal Processing (DSP) and digital systems are attracting more attention, due in large part to the significant advantages of digital systems over their analog counterparts. These advantages include superiority in performance, speed, reliability, efficiency of storage, size and cost. In addition, DSP can solve problems that cannot be solved using ASP, such as the spectral analysis of multicomponent signals, adaptive filtering, and operations at very low frequencies. Following the developments in engineering that occurred in the 1980s and 1990s, DSP became one of the world's fastest growing industries. Since that time DSP has not only had an impact on traditional areas of electrical engineering, but has had far-reaching effects on other domains that deal with information, such as economics, meteorology, seismology, bioengineering, oceanology, communications, astronomy, radar engineering, control engineering and various other applications. This book is based on the lecture notes of Associate Professor Zahir M. Hussain at RMIT University (Melbourne, 2001-2009), the research of Dr. Amin Z. Sadik (at QUT and RMIT, 2005-2008), and the notes of Professor Peter O'Shea at Queensland University of Technology. Part I of the book addresses the representation of analog and digital signals and systems in the time domain and in the frequency domain. The core topics covered are convolution, transforms (Fourier, Laplace, Z, Discrete-Time Fourier, and Discrete Fourier), filters, and random signal analysis. There is also a treatment of some important applications of DSP, including signal detection in noise, radar range estimation, banking and financial applications, and audio effects production. Design and implementation of digital systems (such as integrators, differentiators, resonators and oscillators) are also considered, along with the design of conventional digital filters. Part I is suitable for an elementary course in DSP. Part II (which is suitable for an advanced signal processing course) considers selected signal processing systems and techniques. Core topics covered are the Hilbert transformer, binary signal transmission, phase-locked loops, sigma-delta modulation, noise shaping, quantization, adaptive filters, and non-stationary signal analysis. Part III presents some selected advanced DSP topics. We hope that this book will contribute to the advancement of engineering education and that it will serve as a general reference book on digital signal processing.
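As a small, self-contained taste of two of the Part I core topics (convolution and the Discrete Fourier Transform), the sketch below computes a linear convolution directly, checks it against NumPy's built-in routine and takes the DFT of the result. It is illustrative only and not drawn from the book.

```python
import numpy as np

def convolve_direct(x, h):
    """Linear convolution: y[n] = sum_k h[k] * x[n-k]."""
    y = np.zeros(len(x) + len(h) - 1)
    for n in range(len(y)):
        for k in range(len(h)):
            if 0 <= n - k < len(x):
                y[n] += h[k] * x[n - k]
    return y

x = np.array([1.0, 2.0, 3.0, 4.0])       # input signal
h = np.array([0.5, 0.5])                 # simple moving-average FIR filter
y = convolve_direct(x, h)
assert np.allclose(y, np.convolve(x, h)) # agrees with the library routine

Y = np.fft.fft(y)                        # Discrete Fourier Transform of the output
print(y, np.abs(Y))
```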

Relevance: 90.00%

Abstract:

Many optical networks are limited in speed and processing capability because the optical signal must be converted to an electrical signal and back again. In addition, electronically manipulated interconnects in an otherwise optical network lead to overly complicated systems. Optical spatial solitons are optical beams that propagate without spatial divergence. They are capable of phase-dependent interactions and have therefore been extensively researched as suitable all-optical interconnects for over 20 years. However, they require additional external components: initially, high-voltage power sources were required; several years later, high-power background illumination replaced the high voltage. These additional components have remained the greatest hurdle to realising the applications of the interactions of spatial optical solitons as all-optical interconnects. Recently, however, self-focusing was observed in an otherwise self-defocusing photorefractive crystal. This observation raises the possibility of the formation of soliton-like fields in unbiased self-defocusing media, without the need for an applied electrical field or background illumination. This thesis presents an examination of the possibility of the formation of soliton-like, low-divergence fields in unbiased self-defocusing photorefractive media. The optimal incident beam and photorefractive medium parameters for the formation of these fields are presented, together with an analytical and numerical study of the effect of these parameters. In addition, a preliminary examination of the interactions of two of these fields is presented. In order to complete an analytical examination of the field propagating through the photorefractive medium, the spatial profile of the beam after propagation through the medium was determined. For a low-power solution, it was found that an incident Gaussian field maintains its Gaussian profile as it propagates. This allowed the beam at all times to be described by a single complex beam parameter, while also allowing simple analytical solutions to the appropriate wave equation. An analytical model was developed to describe the effect of the photorefractive medium on the Gaussian beam. Using this model, expressions for the required intensity-dependent changes in both the real and imaginary components of the refractive index were found. Numerical investigation showed that, under certain conditions, a low-power Gaussian field could propagate in self-defocusing photorefractive media with divergence of approximately 0.1% per metre. An investigation of the parameters of a Ce:BaTiO3 crystal showed that the intensity-dependent absorption is wavelength dependent and can in fact transition to intensity-dependent transparency. Thus, with careful wavelength selection, the required intensity-dependent changes in both the real and imaginary components of the refractive index for the formation of a low-divergence Gaussian field are physically realisable. A theoretical model incorporating the dependence of the changes in the real and imaginary components of the refractive index on propagation distance was developed. Analytical and numerical results from this model are congruent with the results from the previous model, showing low-divergence fields with divergence of less than 0.003% over the propagation length of the photorefractive medium.
In addition, this approach confirmed the previously mentioned self-focusing effect of the self-defocusing media and provided an analogy to a negative-index GRIN lens with an intensity-dependent focal length. Experimental results supported the findings of the numerical analysis. Two low-divergence fields were found to be able to interact in a Ce:BaTiO3 crystal in a soliton-like fashion. The strength of these interactions was found to depend on the degree of divergence of the individual beams. This research found that low-divergence fields are possible in unbiased self-defocusing photorefractive media, and that soliton-like interactions between two of these fields are possible. However, in order for these types of fields to be used in future all-optical interconnects, the manipulation of these interactions, together with the ability of these fields to guide a second beam at a different wavelength, must be investigated.
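For readers unfamiliar with the complex beam parameter mentioned above, the sketch below propagates a Gaussian beam's q-parameter through a uniform medium and recovers the beam width at several distances. The waist, wavelength and uniform refractive index are illustrative assumptions; the intensity-dependent index changes studied in the thesis are not modelled.

```python
import numpy as np

def beam_width(z, w0, wavelength, n=1.0):
    """Gaussian beam 1/e^2 radius after propagating a distance z.

    Uses the complex beam parameter q(z) = q0 + z with
    q0 = i * pi * w0**2 * n / wavelength (beam waist at z = 0).
    """
    q0 = 1j * np.pi * w0**2 * n / wavelength
    q = q0 + z
    # The beam width is recovered from the imaginary part of 1/q
    return np.sqrt(-wavelength / (np.pi * n * np.imag(1.0 / q)))

# Example (assumed): 50 micron waist, 633 nm beam, up to 10 mm of propagation
w0, lam = 50e-6, 633e-9
for z in (0.0, 5e-3, 10e-3):
    print(z, beam_width(z, w0, lam))
```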

Relevance: 90.00%

Abstract:

Several authors stress that data provides a crucial foundation for operational, tactical and strategic decisions (e.g., Redman 1998, Tee et al. 2007). Data provides the basis for decision making, as data collection and processing are typically associated with reducing uncertainty in order to make more effective decisions (Daft and Lengel 1986). While the first series of investments in Information Systems/Information Technology (IS/IT) in organizations improved data collection, restricted computational capacity and limited processing power created challenges (Simon 1960). Fifty years on, capacity and processing problems are increasingly less relevant; in fact, the opposite problem exists. Determining data relevance and usefulness is complicated by increased data capture and storage capacity, as well as continual improvements in information processing capability. As the IT landscape changes, businesses are inundated with ever-increasing volumes of data from both internal and external sources, available on both an ad-hoc and a real-time basis. More data, however, does not necessarily translate into more effective and efficient organizations, nor does it increase the likelihood of better or timelier decisions. This raises questions about what data managers require to support their decision-making processes.

Relevance: 90.00%

Abstract:

Airports represent the epitome of complex systems, with multiple stakeholders, multiple jurisdictions and complex interactions between many actors. The large number of existing models that capture different aspects of the airport is a testament to this. However, these existing models do not systematically consider modelling requirements, nor how stakeholders such as airport operators or airlines would make use of the models. This can detrimentally affect the verification and validation of models and makes the development of extensible and reusable modelling tools difficult. This paper develops, from the Concept of Operations (CONOPS) framework, a methodology to help structure the review and development of modelling capabilities and usage scenarios. The method is applied to a review of existing airport terminal passenger models. It is found that existing models can be broadly categorised according to four usage scenarios: capacity planning, operational planning and design, security policy and planning, and airport performance review. The models, the performance metrics they evaluate and their usage scenarios are discussed. It is found that capacity and operational planning models predominantly focus on performance metrics such as waiting time, service time and congestion, whereas performance review models attempt to link those metrics to passenger satisfaction outcomes. Security policy models, on the other hand, focus on probabilistic risk assessment. However, there is an emerging focus on the need to capture trade-offs between multiple criteria such as security and processing time. Based on the CONOPS framework and the literature findings, guidance is provided for the development of future airport terminal models.

Relevance: 90.00%

Abstract:

Over the last twenty years, the use of open content licences has become increasingly, and surprisingly, popular. The use of such licences challenges the traditional incentive-based model of exclusive rights under copyright. Instead of providing a means to charge for the use of particular works, what seems important is mitigating potential personal harm to the author and, in some cases, preventing non-consensual commercial exploitation. It is interesting in this context to observe the primacy of what are essentially moral rights over the exclusionary economic rights. The core elements of common open content licences map fairly closely to continental conceptions of the moral rights of authorship. Most obviously, almost all free software and free culture licences require attribution of authorship. More interestingly, there is a tension between the social norms developed in free software communities and those that have emerged in the creative arts over integrity and commercial exploitation. For programmers interested in free software, licence terms that prohibit commercial use or modification are almost completely inconsistent with the ideological and utilitarian values that underpin the movement. For those in the creative industries, on the other hand, non-commercial terms and, to a lesser extent, terms that prohibit all but verbatim distribution continue to play an extremely important role in the sharing of copyright material. While prohibitions on commercial use often serve an economic imperative, many creators also have a personal interest in avoiding harmful exploitation of their expression – an interest that has sometimes been recognised as forming a component of the moral right of integrity. One particular continental moral right – the right of withdrawal – is present neither in Australian law nor in any of the common open content licences. Despite some marked differences, both free software and free culture participants are using contractual methods to articulate the norms of permissible sharing. Legal enforcement is rare and often prohibitively expensive, and the various communities accordingly rely upon shared understandings of acceptable behaviour. The licences that are commonly used represent a formalised expression of these community norms and provide the theoretically enforceable legal baseline that lends them legitimacy. The core terms of these licences are designed primarily to alleviate risk and minimise transaction costs in sharing and using copyright expression. Importantly, however, the range of available licences reflects different optional balances in the norms of creating and sharing material. Generally, it is possible to see that, stemming particularly from the US, open content licences are fundamentally important in providing a set of normatively accepted copyright balances that reflect the interests sought to be protected through moral rights regimes. As the cost of creation, distribution, storage and processing of expression continues to fall towards zero, there are increasing incentives to adopt open content licences to facilitate wide distribution and reuse of creative expression. Thinking of these protocols not only as reducing transaction costs but as setting normative principles of participation assists in conceptualising the role of open content licences and the continuing tensions that permeate modern copyright law.

Relevance: 90.00%

Abstract:

Different types of defects can be introduced into graphene during material synthesis and significantly influence its properties. In this work, we investigated the effects of structural defects, edge functionalisation and edge reconstruction on the fracture strength and morphology of graphene using molecular dynamics simulations. Minimum energy path analysis was conducted to investigate the formation of Stone-Wales defects. We also employed out-of-plane perturbation and the energy minimisation principle to study the possible morphologies of graphene nanoribbons with edge termination. Our numerical results show that the fracture strength of graphene depends on defects and environmental temperature. However, pre-existing defects may be healed, resulting in strength recovery. Edge functionalisation can induce compressive stress and ripples in the edge areas of graphene nanoribbons. On the other hand, edge reconstruction contributes to tensile stress and a curved shape in the graphene nanoribbons.