993 results for APPEAR


The effects of medical grade polycaprolactone–tricalcium phosphate (mPCL–TCP) (80:20) scaffolds on primary human alveolar osteoblasts (AOs) were compared with those of standard tissue-culture plates. Of the seeded AOs, 70% adhered to and proliferated on the scaffold surface and within open and interconnected pores; they formed multi-layered sheets and collagen fibers with uniform distribution within 28 days. Elevation of alkaline phosphatase activity occurred in scaffold–cell constructs independent of osteogenic induction. The AO proliferation rate increased, and a significant decrease in the calcium concentration of the medium was seen for both scaffolds and plates under induction conditions. mPCL–TCP scaffolds significantly influenced the AO expression pattern of osterix and osteocalcin (OCN). Osteogenic induction down-regulated OCN at both the RNA and protein levels on scaffolds (3D) by day 7, and up-regulated OCN in cell-culture plates (2D) by day 14, but OCN levels on scaffolds were higher than on cell-culture plates. Immunocytochemical signals for type I collagen, osteopontin and osteocalcin were detected at the outer parts of scaffold–cell constructs. More mineral nodules were found in induced than in non-induced constructs. Only induced 2D cultures showed nodule formation. mPCL–TCP scaffolds appear to stimulate osteogenesis in vitro by activating a cellular response in AOs to form mineralized tissue. There is a fundamental difference between culturing AOs in 2D and 3D environments that should be considered when studying osteogenesis in vitro.

The cascading appearance-based (CAB) feature extraction technique has established itself as the state-of-the-art in extracting dynamic visual speech features for speech recognition. In this paper, we will focus on investigating the effectiveness of this technique for the related speaker verification application. By investigating the speaker verification ability of each stage of the cascade, we will demonstrate that the same steps taken to reduce static speaker and environmental information for the visual speech recognition application also provide similar improvements for visual speaker recognition. A further study is conducted comparing synchronous HMM (SHMM) based fusion of CAB visual features and traditional perceptual linear predictive (PLP) acoustic features to show that the higher complexity inherent in the SHMM approach does not appear to provide any improvement in the final audio-visual speaker verification system over simpler utterance-level score fusion.
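
As a concrete illustration of the simpler baseline mentioned above, the sketch below fuses per-utterance acoustic and visual verification scores with a weighted sum; the weight, threshold, and score values are hypothetical placeholders rather than values from the paper.

```python
# Minimal sketch of utterance-level score fusion for audio-visual speaker
# verification (the simpler baseline the abstract compares against SHMM fusion).
# The weight and decision threshold below are hypothetical placeholders; in
# practice both would be tuned on a development set.

def fuse_scores(acoustic_score: float, visual_score: float, w: float = 0.7) -> float:
    """Weighted-sum fusion of per-utterance verification scores."""
    return w * acoustic_score + (1.0 - w) * visual_score

def verify(acoustic_score: float, visual_score: float, threshold: float = 0.0) -> bool:
    """Accept the speaker claim if the fused score clears the threshold."""
    return fuse_scores(acoustic_score, visual_score) >= threshold

# Hypothetical scores for one test utterance: PLP-based acoustic model vs. CAB visual model.
print(verify(acoustic_score=1.8, visual_score=-0.4))   # True under these placeholder values
```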

Crash prediction models are used for a variety of purposes, including forecasting the expected future performance of various transportation system segments with similar traits. The influence of intersection features on safety has been examined extensively because intersections experience a relatively large proportion of motor vehicle conflicts and crashes compared to other segments in the transportation system. Findings on the effects of left-turn lanes at intersections in particular have been mixed in the literature: some researchers have found that left-turn lanes are beneficial to safety, while others have reported detrimental effects. This inconsistency is not surprising given that the installation of left-turn lanes is often endogenous, that is, influenced by crash counts and/or traffic volumes. Endogeneity creates problems in econometric and statistical models and is likely to account for the inconsistencies reported in the literature. This paper reports on a limited-information maximum likelihood (LIML) estimation approach to compensate for endogeneity between left-turn lane presence and angle crashes. The effects of endogeneity are mitigated using the approach, revealing the unbiased effect of left-turn lanes on crash frequency for a dataset of Georgia intersections. The research shows that without accounting for endogeneity, left-turn lanes ‘appear’ to contribute to crashes; however, when endogeneity is accounted for in the model, left-turn lanes reduce angle crash frequencies, as expected by engineering judgment. Other endogenous variables may lurk in crash models as well, suggesting that the method may be used to correct simultaneity problems with other variables and in other transportation modeling contexts.
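
The sketch below illustrates the instrumenting idea on synthetic data using two-stage least squares, a close relative of the LIML estimator used in the paper; the variables, instrument, and linear crash model are hypothetical simplifications of the paper's crash-frequency setting.

```python
# Illustrative sketch (not the paper's model): two-stage least squares on synthetic
# data to show how instrumenting an endogenous left-turn-lane indicator changes its
# estimated effect. The paper itself uses limited-information maximum likelihood
# (LIML) on crash-frequency data; variable names and data here are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n = 5000
volume = rng.normal(size=n)                       # traffic volume (observed exogenous driver)
risk = rng.normal(size=n)                         # unobserved crash risk (source of endogeneity)
z = rng.normal(size=n)                            # instrument: e.g. a policy/geometry variable
# Left-turn-lane installation responds to volume, unobserved risk, and the instrument.
left_turn = (0.8 * volume + 0.8 * risk + 1.0 * z + rng.normal(size=n) > 0).astype(float)
# True model: lanes REDUCE crashes (-0.5), but risk raises both crashes and installation.
crashes = 1.0 + 0.6 * volume - 0.5 * left_turn + 1.5 * risk + rng.normal(size=n)

def ols(y, X):
    return np.linalg.lstsq(X, y, rcond=None)[0]

X_naive = np.column_stack([np.ones(n), volume, left_turn])
print("naive OLS effect of left-turn lane:", ols(crashes, X_naive)[2])   # biased upward

# Stage 1: predict the endogenous regressor from exogenous variables plus the instrument.
Z = np.column_stack([np.ones(n), volume, z])
left_turn_hat = Z @ ols(left_turn, Z)
# Stage 2: replace the endogenous regressor with its fitted value.
X_iv = np.column_stack([np.ones(n), volume, left_turn_hat])
print("2SLS effect of left-turn lane:", ols(crashes, X_iv)[2])           # close to -0.5
```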

Many industrial processes and systems can be modelled mathematically by a set of Partial Differential Equations (PDEs). Finding a solution to such a PDE model is essential for system design, simulation, and process control purposes. However, major difficulties appear when solving PDEs with singularity. Traditional numerical methods, such as finite difference, finite element, and polynomial based orthogonal collocation, not only have limitations in fully capturing the process dynamics but also demand enormous computation power due to the large number of elements or mesh points needed to accommodate sharp variations. To tackle this challenging problem, wavelet based approaches and high resolution methods have recently been developed, with successful applications to a fixed-bed adsorption column model. Our investigation has shown that recent advances in wavelet based approaches and high resolution methods have the potential to be adopted for solving more complicated dynamic system models. This chapter will highlight the successful applications of these new methods in solving complex models of simulated-moving-bed (SMB) chromatographic processes. An SMB process is a distributed parameter system and can be mathematically described by a set of partial/ordinary differential equations and algebraic equations. These equations are highly coupled, exhibit wave propagation with steep fronts, and require significant numerical effort to solve. To demonstrate the numerical computing power of the wavelet based approaches and high resolution methods, a single column chromatographic process modelled by a Transport-Dispersive-Equilibrium linear model is investigated first. Numerical solutions from the upwind-1 finite difference, wavelet-collocation, and high resolution methods are evaluated by quantitative comparisons with the analytical solution for a range of Peclet numbers. After that, the advantages of the wavelet based approaches and high resolution methods are further demonstrated through applications to a dynamic SMB model for an enantiomers separation process. This research has revealed that for a PDE system with a low Peclet number, all existing numerical methods work well, but the upwind finite difference method consumes the most time for the same degree of accuracy of the numerical solution. The high resolution method provides an accurate numerical solution for a PDE system with a medium Peclet number. The wavelet collocation method is capable of capturing steep changes in the solution, and thus can be used for solving PDE models with high singularity. For the complex SMB system models under consideration, both the wavelet based approaches and high resolution methods are good candidates in terms of computation demand and prediction accuracy on the steep front. The high resolution methods have shown better stability in achieving steady state in the specific case studied in this chapter.
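
As a minimal illustration of the upwind-1 baseline referred to above, the sketch below applies the first-order upwind scheme to a linear advection equation with a steep front; the grid sizes and parameters are illustrative only, and the chapter's actual models (Transport-Dispersive-Equilibrium, SMB) are far richer.

```python
# First-order upwind finite-difference scheme for the linear advection equation
# u_t + a*u_x = 0 with a step (steep-front) initial condition.
import numpy as np

a = 1.0                      # advection velocity
nx, L = 200, 1.0             # grid points, domain length
dx = L / nx
dt = 0.8 * dx / a            # CFL number 0.8 for stability
x = np.linspace(0.0, L, nx)
u = np.where(x < 0.2, 1.0, 0.0)     # steep front at x = 0.2

for _ in range(100):
    # Upwind-1: backward difference in space since a > 0; inlet held at u = 1.
    u[1:] = u[1:] - a * dt / dx * (u[1:] - u[:-1])
    u[0] = 1.0

# The front has moved to roughly x = 0.2 + a * 100 * dt, smeared by numerical
# diffusion, which is exactly the limitation that motivates the wavelet and
# high-resolution alternatives discussed above.
print("front location ~", x[np.argmin(np.abs(u - 0.5))])
```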

Occlusions are commonplace within surveillance video, and resolving them accurately is key to tracking objects reliably. The challenge of segmenting objects accurately is further complicated by the fact that, within many real-world surveillance environments, the objects appear very similar. For example, footage of pedestrians in a city environment will consist of many people wearing dark suits. In this paper, we propose a novel technique to segment groups and resolve occlusions using optical flow discontinuities. We demonstrate that the ratio of continuous to discontinuous pixels within a region can be used to locate the overlapping edges, and we incorporate this into an object tracking framework. Results on a portion of the ETISEO database show that the proposed algorithm improves tracking performance overall, and in particular improves tracking within occlusions.
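
A rough sketch of the underlying computation is given below, using OpenCV's Farnebäck dense optical flow as a stand-in (the paper's exact flow method, threshold, and regions are not reproduced): flow discontinuities are taken where the local flow gradient is large, and the continuous-to-discontinuous pixel ratio is computed inside a candidate region.

```python
# Sketch of the core idea: dense optical flow, flow discontinuities from the local
# flow gradient, and the ratio of continuous to discontinuous pixels inside a region.
import cv2
import numpy as np

def discontinuity_ratio(prev_gray, curr_gray, region, grad_thresh=2.0):
    """Return (#continuous / #discontinuous) flow pixels within a (x, y, w, h) region."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    # Magnitude of the spatial gradient of each flow component.
    dudy, dudx = np.gradient(flow[..., 0])
    dvdy, dvdx = np.gradient(flow[..., 1])
    grad_mag = np.sqrt(dudx**2 + dudy**2 + dvdx**2 + dvdy**2)

    x, y, w, h = region
    patch = grad_mag[y:y + h, x:x + w]
    discontinuous = np.count_nonzero(patch > grad_thresh)
    continuous = patch.size - discontinuous
    return continuous / max(discontinuous, 1)

# Usage (hypothetical frames): a low ratio inside a tracked region suggests an
# overlapping edge between occluding objects, which the tracker can then split on.
# prev_gray = cv2.cvtColor(cv2.imread("frame_000.png"), cv2.COLOR_BGR2GRAY)
# curr_gray = cv2.cvtColor(cv2.imread("frame_001.png"), cv2.COLOR_BGR2GRAY)
# print(discontinuity_ratio(prev_gray, curr_gray, region=(40, 30, 64, 128)))
```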

In Australia, rural research and development corporations and companies expended over $AUS500 million on agricultural research and development. A substantial proportion of this is invested in R&D in the beef industry. The Australian beef industry exports almost $AUS5 billion of product annually and invests heavily in new product development to improve beef quality and production efficiency. Review points are critical for effective new product development, yet many research and development bodies, particularly publicly funded ones, appear to ignore the importance of assessing products prior to their release. Significant sums of money are invested in developing technological innovations that have low levels and rates of adoption. The adoption rates could be improved if the developers were more focused on technology uptake and less focused on proving their technologies can be applied in practice. Several approaches have been put forward in an effort to improve rates of adoption into operational settings. This paper presents a study of key technological innovations in the Australian beef industry to assess the use of multiple criteria in evaluating the potential uptake of new technologies. Findings indicate that using multiple criteria to evaluate innovations before commercializing a technology enables researchers to better understand the issues that may inhibit adoption.
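
A minimal sketch of this kind of multi-criteria screening is shown below; the criteria, weights, and scores are hypothetical placeholders, not those used in the study.

```python
# Illustrative weighted-criteria screening of a candidate technology prior to release.
# The criteria, weights, and scores below are hypothetical placeholders.
criteria_weights = {
    "relative advantage": 0.30,
    "compatibility with existing practice": 0.25,
    "complexity (ease of use)": 0.20,
    "trialability": 0.15,
    "observability of benefits": 0.10,
}

def adoption_score(scores_1_to_5: dict) -> float:
    """Weighted average of 1-5 criterion scores; low totals flag likely adoption barriers."""
    return sum(criteria_weights[c] * scores_1_to_5[c] for c in criteria_weights)

candidate = {
    "relative advantage": 4,
    "compatibility with existing practice": 2,   # likely inhibitor of uptake
    "complexity (ease of use)": 3,
    "trialability": 2,
    "observability of benefits": 4,
}
print(f"weighted adoption score: {adoption_score(candidate):.2f} / 5")
```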

Increasingly, celebrities appear not only as endorsers for products but are apparently engaged in entrepreneurial roles as initiators, owners and perhaps even managers in the ventures that market the products they promote. Although the phenomenon is extensively referred to in popular media, scholars have been slow to recognise its importance. This thesis argues theoretically and shows empirically that celebrity entrepreneurs are more effective communicators than typical celebrity endorsers because of their increased engagement with ventures. I theorise that greater engagement increases the celebrity's emotional involvement as perceived by consumers. This is an endorser quality thus far neglected in the marketing communications literature. In turn, emotional involvement, much like the empirically established dimensions trustworthiness, expertise and attractiveness, should affect traditional outcome variables such as attitude towards the advertisement and brand. On the downside, increases in celebrity engagement may lead to relatively stronger and worsening changes in attitudes towards the brand if and when negative information about the celebrity is revealed. A series of eight experiments was conducted on 781 Swedish and Baltic students and 151 Swedish retirees. Though there were nuanced differences and additional complexities in each experiment, participants' reactions to advertisements containing a celebrity portrayed as a typical endorser or entrepreneur were recorded. The overall results of these experiments suggest that emotional involvement can be successfully operationalised as distinct from variables previously known to influence communication effectiveness. In addition, emotional involvement has positive effects on attitudes toward the advertisement and brand that are as strong as the predictors traditionally applied in the marketing communications literature. Moreover, the celebrity entrepreneur condition in the experimental manipulation consistently led to an increase in emotional involvement and, to a lesser extent, trustworthiness, but not expertise and attractiveness. Finally, negative celebrity information led to changes in participants' attitudes towards the brand that were more strongly negative for celebrity entrepreneurs than for celebrity endorsers. In addition, the effect of negative celebrity information on a company's brand is worse when the company supports the celebrity rather than firing them. However, this effect did not appear to interact with the celebrity's purported engagement.

Micro-finance, which includes micro-credit as one of its services, has become big business with a range of models – from those that operate on a strictly business basis to those that come from a philanthropic base, through non-government organisations (NGOs). Success is often measured by the numbers involved and the repayment rates – which are very high, largely because of the lending models used. The purpose of this paper is to identify whether the means used to deliver micro-credit services to the poor are socially responsible. This paper will explore the range of models currently used and propose a model that addresses some of the social responsibility issues that appear to plague delivery. The model is being developed in Beira, the second largest city in Mozambique. Mozambique exhibits many of the characteristics found in other African countries, so the model, if successful, may have implications for other poor African nations.

We present several new observations on the SMS4 block cipher and discuss their cryptographic significance. The crucial observation is that, for some input words, fixed points exist and there are simple linear relationships between the bits of the input and output words of each component of the round functions. This implies that the non-linear function T of SMS4 does not appear random and that the linear transformation provides poor diffusion. Furthermore, the branch number of the linear transformation in the key scheduling algorithm is shown to be less than optimal. The main security implication of these observations is that the round function is not always non-linear. Due to this linearity, it is possible to reduce the number of effective rounds of SMS4 by four. We also investigate the susceptibility of SMS4 to further cryptanalysis. Finally, we demonstrate a successful differential attack on a slightly modified variant of SMS4. These findings raise serious questions about the security provided by SMS4.
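
The fixed-point observation can be illustrated for the linear transformation L of the SMS4 round function (rotation offsets 2, 10, 18 and 24 in the published specification): the sketch below computes the GF(2) null space of L + I, whose non-zero elements are words left unchanged by L. The paper's observations also involve the S-box layer, which this sketch deliberately omits.

```python
# Fixed points of the SMS4 round-function linear transformation
# L(B) = B ^ rol(B,2) ^ rol(B,10) ^ rol(B,18) ^ rol(B,24) on 32-bit words,
# found as the null space of (L + I) over GF(2).

def rol(x: int, r: int) -> int:
    """Rotate a 32-bit word left by r bits."""
    return ((x << r) | (x >> (32 - r))) & 0xFFFFFFFF

def L(x: int) -> int:
    """SMS4 round-function linear transformation (published rotation offsets)."""
    return x ^ rol(x, 2) ^ rol(x, 10) ^ rol(x, 18) ^ rol(x, 24)

def fixed_points(f):
    """Basis of {x : f(x) = x}, i.e. the GF(2) null space of (f + I) on 32-bit words."""
    pivots = []                                  # (pivot bit, reduced column, combination)
    kernel = []
    for j in range(32):
        col = f(1 << j) ^ (1 << j)               # (f + I) applied to unit vector e_j
        combo = 1 << j                           # which e_j's currently sum to `col`
        for pbit, pcol, pcombo in pivots:
            if col & pbit:                       # eliminate this pivot's leading bit
                col ^= pcol
                combo ^= pcombo
        if col == 0:
            kernel.append(combo)                 # dependent column -> nontrivial fixed point
        else:
            pivots.append((col & -col, col, combo))
    return kernel

for b in fixed_points(L):
    assert L(b) == b
    print(f"L fixed point: 0x{b:08X}")
```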

Level crossing crashes have been shown to result in enormous human and financial cost to society. According to the Australian Transport Safety Bureau (ATSB) [5], a total of 632 railway level crossing (RLX) collisions between trains and road vehicles occurred in Australia between 2001 and June 2009. The cost of RLX collisions runs into the tens of millions of dollars each year in Australia [6]. In addition, loss of life and injury are commonplace in instances where collisions occur. Based on estimates that 40% of rail-related fatalities occur at level crossings [12], it is estimated that 142 deaths between 2001 and June 2009 occurred at RLX. The aim of this paper is to (i) summarise crash patterns in Australia, (ii) review existing international ITS interventions to improve level crossing safety, and (iii) highlight open human factors research issues. Human factors (e.g., driver error, lapses or violations) have been shown to be a significant contributing factor in RLX collisions, with drivers of road vehicles particularly responsible for many collisions. Unintentional errors have been found to contribute to 46% of RLX collisions [6] and appear to be far more commonplace than deliberate violations. Humans have been found to be inherently inadequate at using the sensory information available to them to facilitate safe decision-making at RLX, and tend to underestimate the speed of approaching large objects due to the non-linear increases in perceived size [6]. Collisions resulting from misjudgements of train approach speed and distance are common [20]. Thus, a fundamental goal for improved RLX safety is the provision of sufficient contextual information to road vehicle drivers to facilitate safe decision-making regarding crossing behaviours.

The present rate of technological advance continues to place significant demands on data storage devices. The sheer amount of digital data being generated each year, along with consumer expectations, fuels these demands. At present, most digital data is stored magnetically, in the form of hard disk drives or on magnetic tape. The increase in areal density (AD) of magnetic hard disk drives over the past 50 years has been of the order of 100 million times, and current devices are storing data at ADs of the order of hundreds of gigabits per square inch. However, it has been known for some time that the progress in this form of data storage is approaching fundamental limits. The main limitation relates to the lower size limit that an individual bit can have for stable storage. Various techniques for overcoming these fundamental limits are currently the focus of considerable research effort. Most attempt to improve current data storage methods, or modify these slightly for higher density storage. Alternatively, three dimensional optical data storage is a promising field for the information storage needs of the future, offering very high density, high speed memory. There are two ways in which data may be recorded in a three dimensional optical medium: either bit-by-bit (similar in principle to an optical disc medium such as CD or DVD) or by using pages of bit data. Bit-by-bit techniques for three dimensional storage offer high density but are inherently slow due to the serial nature of data access. Page-based techniques, where a two-dimensional page of data bits is written in one write operation, can offer significantly higher data rates due to their parallel nature. Holographic Data Storage (HDS) is one such page-oriented optical memory technique. This field of research has been active for several decades, but with few commercial products presently available. Another page-oriented optical memory technique involves recording pages of data as phase masks in a photorefractive medium. A photorefractive material is one in which the refractive index can be modified by light of the appropriate wavelength and intensity, and this property can be used to store information in these materials. In phase mask storage, two dimensional pages of data are recorded into a photorefractive crystal as refractive index changes in the medium. A low-intensity readout beam propagating through the medium will have its intensity profile modified by these refractive index changes, and a CCD camera can be used to monitor the readout beam and thus read the stored data. The main aim of this research was to investigate data storage using phase masks in the photorefractive crystal, lithium niobate (LiNbO3). Firstly, the experimental methods for storing the two dimensional pages of data (a set of vertical stripes of varying lengths) in the medium are presented. The laser beam used for writing, whose intensity profile is modified by an amplitude mask containing a pattern of the information to be stored, illuminates the lithium niobate crystal, and the photorefractive effect causes the patterns to be stored as refractive index changes in the medium. These patterns are read out non-destructively using a low intensity probe beam and a CCD camera. A common complication of information storage in photorefractive crystals is the issue of destructive readout. This is a problem particularly for holographic data storage, where the readout beam should be at the same wavelength as the beam used for writing.
Since the charge carriers in the medium are still sensitive to the read light field, the readout beam erases the stored information. A method to avoid this is thermal fixing, in which the photorefractive medium is heated to temperatures above 150 °C; this process forms an ionic grating in the medium. This ionic grating is insensitive to the readout beam, and therefore the information is not erased during readout. A non-contact method for determining temperature change in a lithium niobate crystal is presented in this thesis. The temperature-dependent birefringent properties of the medium cause intensity oscillations to be observed for a beam propagating through the medium during a change in temperature. It is shown that each oscillation corresponds to a particular temperature change, and by counting the number of oscillations observed, the temperature change of the medium can be deduced. The presented technique for measuring temperature change could easily be applied to a situation where thermal fixing of data in a photorefractive medium is required. Furthermore, by using an expanded beam and monitoring the intensity oscillations over a wide region, it is shown that the temperature in various locations of the crystal can be monitored simultaneously. This technique could be used to deduce temperature gradients in the medium. It is shown that the three dimensional nature of the recording medium causes interesting degradation effects to occur when the patterns are written for a longer-than-optimal time. This degradation results in the splitting of the vertical stripes in the data pattern, and for long writing exposure times this process can result in the complete deterioration of the information in the medium. It is shown that, simply by using incoherent illumination, the original pattern can be recovered from the degraded state. The reason for the recovery is that the refractive index changes causing the degradation are of a smaller magnitude, since they are induced by the write field components scattered from the written structures. During incoherent erasure, the lower-magnitude refractive index changes are neutralised first, allowing the original pattern to be recovered. The degradation process is shown to be reversed during the recovery process, and a simple relationship is found relating the times at which particular features appear during degradation and recovery. A further outcome of this work is that a minimum stripe width of 30 µm is required for accurate storage and recovery of the information in the medium; any size smaller than this results in incomplete recovery. The degradation and recovery process could be applied to image scrambling or cryptography for optical information storage. A two dimensional numerical model based on the finite-difference beam propagation method (FD-BPM) is presented and used to gain insight into the pattern storage process. The model shows that the degradation of the patterns is due to the complicated path taken by the write beam as it propagates through the crystal, and in particular the scattering of this beam from the induced refractive index structures in the medium. The model indicates that the highest quality pattern storage would be achieved with a thin 0.5 mm medium; however, this type of medium would also remove the degradation property of the patterns and the subsequent recovery process.
To overcome the simplistic treatment of the refractive index change in the FD-BPM model, a fully three dimensional photorefractive model developed by Devaux is presented. This model provides significant insight into the pattern storage, particularly for the degradation and recovery process, and confirms the theory that recovery of the degraded patterns is possible because the refractive index changes responsible for the degradation are of a smaller magnitude. Finally, a detailed analysis of the pattern formation and degradation dynamics for periodic patterns of various periodicities is presented. It is shown that stripe widths in the write beam of greater than 150 µm result in the formation of different types of refractive index changes compared with stripes of smaller widths. As a result, it is shown that the pattern storage method discussed in this thesis has an upper feature size limit of 150 µm for accurate and reliable pattern storage.
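
A minimal sketch of the oscillation-counting temperature measurement described above is given below; the calibration constant (temperature change per full oscillation) and the synthetic intensity trace are placeholders, and peak counting is done with SciPy.

```python
# Sketch of the non-contact temperature measurement: count the intensity
# oscillations of a probe beam while the birefringent crystal heats up, then
# multiply by the calibration constant (temperature change per full oscillation).
# The synthetic trace and the 3.2 K-per-oscillation calibration are placeholders.
import numpy as np
from scipy.signal import find_peaks

DELTA_T_PER_OSCILLATION = 3.2      # kelvin per full retardation cycle (hypothetical)

def temperature_change(intensity: np.ndarray) -> float:
    """Estimate total temperature change from one detector's intensity trace."""
    peaks, _ = find_peaks(intensity, prominence=0.1)
    return len(peaks) * DELTA_T_PER_OSCILLATION

# Synthetic example: retardation grows with temperature, giving I ~ sin^2(phase/2).
t = np.linspace(0.0, 1.0, 2000)
phase = 2 * np.pi * 6.5 * t        # ~6.5 oscillations over the heating run
intensity = np.sin(phase / 2) ** 2 + 0.02 * np.random.default_rng(1).normal(size=t.size)

print(f"estimated temperature change: {temperature_change(intensity):.1f} K")
```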

Aims: Telemonitoring (TM) and structured telephone support (STS) have the potential to deliver specialised management to more patients with chronic heart failure (CHF), but their efficacy is still to be proven. Objectives: To review randomised controlled trials (RCTs) of TM or STS on all-cause mortality and all-cause and CHF-related hospitalisations in patients with CHF, as a non-invasive remote model of specialised disease-management intervention. Methods and Results: Data sources: We searched 15 electronic databases and hand-searched bibliographies of relevant studies, systematic reviews, and meeting abstracts. Two reviewers independently extracted all data. Study eligibility and participants: We included any randomised controlled trial (RCT) comparing TM or STS to usual care of patients with CHF. Studies that included intensified management with additional home or clinic visits were excluded. Synthesis: Primary outcomes (mortality and hospitalisations) were analysed; secondary outcomes (cost, length of stay, quality of life) were tabulated. Results: Thirty RCTs of STS and TM were identified (25 peer-reviewed publications (n=8,323) and five abstracts (n=1,482)). Of the 25 peer-reviewed studies, 11 evaluated TM (2,710 participants), 16 evaluated STS (5,613 participants) and two tested both interventions. TM reduced all-cause mortality (risk ratio (RR) 0.66 [95% CI 0.54-0.81], p<0.0001) and STS showed similar trends (RR 0.88 [95% CI 0.76-1.01], p=0.08). Both TM (RR 0.79 [95% CI 0.67-0.94], p=0.008) and STS (RR 0.77 [95% CI 0.68-0.87], p<0.0001) reduced CHF-related hospitalisations. Both interventions improved quality of life, reduced costs, and were acceptable to patients. Improvements in prescribing, patient knowledge and self-care, and functional class were observed. Conclusion: TM and STS both appear to be effective interventions to improve outcomes in patients with CHF.
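
For illustration, the sketch below shows how risk ratios of this kind are pooled with inverse-variance weighting on the log scale; the three studies are hypothetical, and the review's actual pooling model (fixed vs. random effects) is not reproduced.

```python
# Inverse-variance (fixed-effect) pooling of risk ratios on the log scale.
# The three (RR, lower CI, upper CI) triples below are hypothetical trials.
import math

studies = [(0.70, 0.50, 0.98), (0.85, 0.60, 1.20), (0.55, 0.35, 0.86)]

num = den = 0.0
for rr, lcl, ucl in studies:
    log_rr = math.log(rr)
    se = (math.log(ucl) - math.log(lcl)) / (2 * 1.96)   # SE recovered from the 95% CI
    w = 1.0 / se**2                                     # inverse-variance weight
    num += w * log_rr
    den += w

pooled = math.exp(num / den)
se_pooled = math.sqrt(1.0 / den)
ci = (math.exp(num / den - 1.96 * se_pooled), math.exp(num / den + 1.96 * se_pooled))
print(f"pooled RR {pooled:.2f} [95% CI {ci[0]:.2f}-{ci[1]:.2f}]")
```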

Facebook has clocked up some 400 million registered users worldwide; Twitter has just reached the 100 million mark. Within these communities, Australians appear to be particularly active: we lead the world by spending nearly eight hours per month using social media. These figures highlight the fact that for most businesses, social media are now important for engaging with customers.

Driver aggression is an increasing concern for motorists, with some research suggesting that drivers who behave aggressively perceive their actions as justified by the poor driving of others. Thus attributions may play an important role in understanding driver aggression. A convenience sample of 193 drivers (aged 17-36) randomly assigned to two separate roles (‘perpetrators’ and ‘victims’) responded to eight scenarios of driver aggression. Drivers also completed the Aggression Questionnaire and Driving Anger Scale. Consistent with the actor-observer bias, ‘victims’ (or recipients) in this study were significantly more likely than ‘perpetrators’ (or instigators) to endorse inadequacies in the instigator’s driving skills as the cause of driver aggression. Instigators were significantly more likely to attribute the depicted behaviours to external but temporary causes (lapses in judgement or errors) rather than stable causes. This suggests that instigators recognised drivers as responsible for driving aggressively but downplayed this somewhat in comparison to ‘victims’/recipients. Recipients and instigators agreed that the behaviours were examples of aggressive driving, but instigators appeared to focus on the degree of intentionality of the driver in making their assessments while recipients appeared to focus on the safety implications. Contrary to expectations, instigators gave mean ratings of the emotional impact of driving aggression on recipients that were higher in all cases than the mean ratings given by the recipients. Drivers appear to perceive aggressive behaviours as modifiable, with the implication that interventions could appeal to drivers’ sense of self-efficacy and suggest strategies for overcoming the plausible and modifiable attributions (e.g. lapses in judgement; errors) underpinning behaviours perceived as aggressive.

This chapter sets out the debates about the changing role of audiences in relation to user-created content as they appear in New Media and Cultural Studies. The discussion moves beyond the simple dichotomies between active producers and passive audiences, and draws on empirical evidence, in order to examine those practices that are most ordinary and widespread. Building on the knowledge of television’s role in facilitating public life, and the everyday, affective practices through which it is experienced and used, I focus on the way in which YouTube operates as a site of community, creativity and cultural citizenship; and as an archive of popular cultural memory.