Abstract:
Many current HCI, social networking, ubiquitous computing, and context-aware designs must, in order to function, access or collect significant personal information about the user. This raises concerns about privacy and security in both the research community and the mainstream media. From a practical perspective, in the social world, secrecy and security form an ongoing accomplishment rather than something that is set up and left alone. We explore how design can support privacy as practical action, and investigate the collective information practices and the privacy and security concerns of participants in a mobile social software system for ride sharing. This paper contributes an understanding of HCI security and privacy tensions, discovered while "designing in use" with a Reflective, Agile, Iterative Design (RAID) method.
Abstract:
Purpose – The work presented in this paper aims to provide an approach to classifying web logs by personal properties of users. Design/methodology/approach – The authors describe an iterative system that begins with a small set of manually labeled terms, which are used to label queries from the log. A set of background knowledge related to these labeled queries is acquired by combining web search results on these queries. This background set is used to obtain many terms that are related to the classification task. The system then ranks each of the related terms, choosing those that best fit the personal properties of the users. These terms are then used to begin the next iteration. Findings – The authors identify the difficulties of classifying web logs by approaching this problem from a machine learning perspective. By applying the approach developed, the authors are able to show that many queries in a large query log can be classified. Research limitations/implications – Testing results in this type of classification work is difficult, as the true personal properties of web users are unknown. Evaluating the classification results by comparing classified queries to well-known age-related sites is a direction currently being explored. Practical implications – This research is background work that can be incorporated into search engines or other web-based applications, to help marketing companies and advertisers. Originality/value – This research enhances the current state of knowledge in short-text classification and query log learning. Keywords: Classification schemes, Computer networks, Information retrieval, Man-machine systems, User interfaces
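The iterative labelling loop described above can be sketched in a few lines. This is a minimal illustration with an invented toy query log; the frequency-ratio score used to rank candidate terms is an assumed stand-in, since the abstract does not specify the actual ranking scheme:

```python
from collections import Counter

def bootstrap_terms(queries, seed_terms, iterations=2, top_k=2):
    """Grow a set of class-indicative terms from a query log.

    Toy sketch of the described loop: label queries containing a known
    term, rank co-occurring terms by how often they appear in labelled
    versus unlabelled queries, and promote the top-ranked ones for the
    next round.  The ratio score below is invented for illustration.
    """
    terms = set(seed_terms)
    for _ in range(iterations):
        labelled = [q for q in queries if terms & set(q.split())]
        unlabelled = [q for q in queries if not terms & set(q.split())]
        pos = Counter(w for q in labelled for w in q.split() if w not in terms)
        neg = Counter(w for q in unlabelled for w in q.split())
        # Prefer terms frequent in labelled queries but rare elsewhere.
        ranked = sorted(pos, key=lambda w: pos[w] / (1 + neg[w]), reverse=True)
        terms.update(ranked[:top_k])
    return terms
```

Each iteration widens the labelled set, which is the mechanism the paper relies on to label many queries from a small manual seed.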
Abstract:
For the analysis of material nonlinearity, an effective shear modulus approach based on the strain control method is proposed in this paper using the point collocation method. Hencky's total deformation theory is used to evaluate the effective shear modulus, Young's modulus and Poisson's ratio, which are treated as spatial field variables. These effective properties are obtained by the strain-controlled projection method in an iterative manner. To evaluate the second-order derivatives of the shape function at the field point, radial basis functions (RBFs) in the local support domain are used. Several numerical examples are presented to demonstrate the efficiency and accuracy of the proposed method, and comparisons are made with analytical solutions and the finite element method (ABAQUS).
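The iterative evaluation of an effective secant modulus can be illustrated with a one-dimensional sketch. The hyperbolic stress-strain law below is an assumed stand-in, not the paper's constitutive model, and the fixed-point loop mirrors the strain-controlled iteration in spirit only:

```python
def secant_modulus_iteration(tau_target, g0=1.0e3, gamma0=1.0e-3,
                             tol=1e-10, max_iter=100):
    """Strain iteration for the effective (secant) shear modulus.

    Assumes a hyperbolic law tau = G0*gamma / (1 + gamma/gamma0) as a
    stand-in for Hencky-type total deformation behaviour.  Starting from
    the linear-elastic strain, the secant modulus G_eff = tau/gamma is
    updated until the strain converges.
    """
    gamma = tau_target / g0                  # linear-elastic first guess
    g_eff = g0
    for _ in range(max_iter):
        g_eff = g0 / (1.0 + gamma / gamma0)  # secant modulus at current strain
        gamma_new = tau_target / g_eff       # strain implied by that modulus
        if abs(gamma_new - gamma) < tol:
            break
        gamma = gamma_new
    return g_eff, gamma
```

For a target stress of 0.5 (with G0 = 10^3 and gamma0 = 10^-3), the loop converges to a strain of 10^-3 and an effective modulus of half the elastic value; in the paper's setting the same update is applied field-point by field-point.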
Abstract:
Proteases regulate a spectrum of diverse physiological processes, and dysregulation of proteolytic activity drives a plethora of pathological conditions. Understanding protease function is essential to appreciating many aspects of normal physiology and progression of disease. Consequently, development of potent and specific inhibitors of proteolytic enzymes is vital to provide tools for the dissection of protease function in biological systems and for the treatment of diseases linked to aberrant proteolytic activity. The studies in this thesis describe the rational design of potent inhibitors of three proteases that are implicated in disease development. Additionally, key features of the interaction of proteases and their cognate inhibitors or substrates are analysed and a series of rational inhibitor design principles are expounded and tested. Rational design of protease inhibitors relies on a comprehensive understanding of protease structure and biochemistry. Analysis of known protease cleavage sites in proteins and peptides is a commonly used source of such information. However, model peptide substrate and protein sequences have widely differing levels of backbone constraint and hence can adopt highly divergent structures when binding to a protease’s active site. This may result in identical sequences in peptides and proteins having different conformations and diverse spatial distribution of amino acid functionalities. Regardless of this, protein and peptide cleavage sites are often regarded as being equivalent. One of the key findings in the following studies is a definitive demonstration of the lack of equivalence between these two classes of substrate and invalidation of the common practice of using the sequences of model peptide substrates to predict cleavage of proteins in vivo. Another important feature for protease substrate recognition is subsite cooperativity. 
This type of cooperativity is commonly referred to as protease or substrate binding subsite cooperativity and is distinct from allosteric cooperativity, where binding of a molecule distant from the protease active site affects the binding affinity of a substrate. Subsite cooperativity may be intramolecular where neighbouring residues in substrates are interacting, affecting the scissile bond’s susceptibility to protease cleavage. Subsite cooperativity can also be intermolecular where a particular residue’s contribution to binding affinity changes depending on the identity of neighbouring amino acids. Although numerous studies have identified subsite cooperativity effects, these findings are frequently ignored in investigations probing subsite selectivity by screening against diverse combinatorial libraries of peptides (positional scanning synthetic combinatorial library; PS-SCL). This strategy for determining cleavage specificity relies on the averaged rates of hydrolysis for an uncharacterised ensemble of peptide sequences, as opposed to the defined rate of hydrolysis of a known specific substrate. Further, since PS-SCL screens probe the preference of the various protease subsites independently, this method is inherently unable to detect subsite cooperativity. However, mean hydrolysis rates from PS-SCL screens are often interpreted as being comparable to those produced by single peptide cleavages. Before this study no large systematic evaluation had been made to determine the level of correlation between protease selectivity as predicted by screening against a library of combinatorial peptides and cleavage of individual peptides. This subject is specifically explored in the studies described here. In order to establish whether PS-SCL screens could accurately determine the substrate preferences of proteases, a systematic comparison of data from PS-SCLs with libraries containing individually synthesised peptides (sparse matrix library; SML) was carried out. 
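Why positional averaging is blind to intermolecular cooperativity can be seen with a toy two-subsite model. The hydrolysis rates below are invented for illustration: residues A and B are strongly cooperative (each active only with one partner), while C is mediocre but insensitive to its neighbour.

```python
# Invented two-subsite hydrolysis rates (arbitrary units).
RATES = {
    ("A", "X"): 10.0, ("A", "Y"): 0.0,
    ("B", "X"): 0.0,  ("B", "Y"): 10.0,
    ("C", "X"): 6.0,  ("C", "Y"): 5.0,
}

def pssc_prediction(rates):
    """PS-SCL-style prediction: choose each position's residue independently,
    by its rate averaged over all partners at the other position."""
    p1 = sorted({a for a, _ in rates})
    p2 = sorted({b for _, b in rates})
    best1 = max(p1, key=lambda a: sum(rates[a, b] for b in p2) / len(p2))
    best2 = max(p2, key=lambda b: sum(rates[a, b] for a in p1) / len(p1))
    return best1, best2

def sml_prediction(rates):
    """SML-style prediction: screen every individual sequence, keep the best."""
    return max(rates, key=rates.get)
```

Here the marginal averages point to the cooperativity-blind (C, X) sequence, while screening individual sequences recovers the far better cooperative substrate (A, X) that averaging hides.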
These SML libraries were designed to include all possible sequence combinations of the residues that were suggested to be preferred by a protease using the PS-SCL method. SML screening against the three serine proteases kallikrein 4 (KLK4), kallikrein 14 (KLK14) and plasmin revealed highly preferred peptide substrates that could not have been deduced by PS-SCL screening alone. Comparing protease subsite preference profiles from screens of the two types of peptide libraries showed that the most preferred substrates were not detected by PS-SCL screening, as a consequence of intermolecular cooperativity being negated by the very nature of PS-SCL screening. Sequences that are highly favoured as a result of intermolecular cooperativity achieve optimal protease subsite occupancy, and thereby interact with very specific determinants of the protease. Identifying these substrate sequences is important since they may be used to produce potent and selective inhibitors of proteolytic enzymes. This study found that highly favoured substrate sequences that relied on intermolecular cooperativity allowed for the production of potent inhibitors of KLK4, KLK14 and plasmin. Peptide aldehydes based on preferred plasmin sequences produced high affinity transition state analogue inhibitors for this protease. The most potent of these maintained specificity over plasma kallikrein (known to have a very similar substrate preference to plasmin). Furthermore, the efficiency of this inhibitor in blocking fibrinolysis in vitro was comparable to aprotinin, which previously saw clinical use to reduce perioperative bleeding. One substrate sequence particularly favoured by KLK4 was substituted into the 14 amino acid, circular sunflower trypsin inhibitor (SFTI). This resulted in a highly potent and selective inhibitor (SFTI-FCQR) which attenuated protease activated receptor signalling by KLK4 in vitro.
Moreover, SFTI-FCQR and paclitaxel synergistically reduced growth of ovarian cancer cells in vitro, making this inhibitor a lead compound for further therapeutic development. Similar incorporation of a preferred KLK14 amino acid sequence into the SFTI scaffold produced a potent inhibitor for this protease. However, the conformationally constrained SFTI backbone enforced a different intramolecular cooperativity, which masked a KLK14-specific determinant. As a consequence, the level of selectivity achievable was lower than that found for the KLK4 inhibitor. Standard mechanism inhibitors such as SFTI rely on a stable acyl-enzyme intermediate for high affinity binding. This is achieved by a conformationally constrained canonical binding loop that allows for reformation of the scissile peptide bond after cleavage. Amino acid substitutions within the inhibitor to target a particular protease may compromise structural determinants that support the rigidity of the binding loop and thereby prevent the engineered inhibitor reaching its full potential. An in silico analysis was carried out to examine the potential for further improvements to the potency and selectivity of the SFTI-based KLK4 and KLK14 inhibitors. Molecular dynamics simulations suggested that the substitutions within SFTI required to target KLK4 and KLK14 had compromised the intramolecular hydrogen bond network of the inhibitor and caused a concomitant loss of binding loop stability. Furthermore, in silico amino acid substitution revealed a consistent correlation between a higher frequency of formation and number of internal hydrogen bonds in SFTI variants and lower inhibition constants. These predictions allowed for the production of second generation inhibitors with enhanced binding affinity toward both targets, and highlight the importance of considering intramolecular cooperativity effects when engineering proteins or circular peptides to target proteases.
The findings from this study show that although PS-SCLs are a useful tool for high throughput screening of approximate protease preference, later refinement by SML screening is needed to reveal optimal subsite occupancy due to cooperativity in substrate recognition. This investigation has also demonstrated the importance of maintaining structural determinants of backbone constraint and conformation when engineering standard mechanism inhibitors for new targets. Combined, these results show that backbone conformation and amino acid cooperativity have more prominent roles than previously appreciated in determining substrate/inhibitor specificity and binding affinity. The three key inhibitors designed during this investigation are now being developed as lead compounds for cancer chemotherapy, control of fibrinolysis and cosmeceutical applications. These compounds form the basis of a portfolio of intellectual property which will be further developed in the coming years.
Abstract:
Complex networks have been studied extensively due to their relevance to many real-world systems such as the world-wide web, the internet, and biological and social systems. During the past two decades, studies of such networks in different fields have produced many significant results concerning their structures, topological properties, and dynamics. Three well-known properties of complex networks are scale-free degree distribution, the small-world effect and self-similarity. The search for additional meaningful properties and the relationships among these properties is an active area of current research. This thesis investigates a newer aspect of complex networks, namely their multifractality, which is an extension of the concept of self-similarity. The first part of the thesis aims to confirm that the study of properties of complex networks can be expanded to a wider field including more complex weighted networks. Those real networks that have been shown to possess the self-similarity property in the existing literature are all unweighted networks. We use protein-protein interaction (PPI) networks as a key example to show that their weighted networks inherit the self-similarity of the original unweighted networks. Firstly, we confirm that the random sequential box-covering algorithm is an effective tool to compute the fractal dimension of complex networks. This is demonstrated on the Homo sapiens and E. coli PPI networks as well as their skeletons. Our results verify that the fractal dimension of the skeleton is smaller than that of the original network because the shortest distance between nodes is larger in the skeleton; hence, for a fixed box size, more boxes are needed to cover the skeleton. Then we adopt the iterative scoring method to generate weighted PPI networks of five species, namely Homo sapiens, E. coli, yeast, C. elegans and Arabidopsis thaliana.
By using the random sequential box-covering algorithm, we calculate the fractal dimensions for both the original unweighted PPI networks and the generated weighted networks. The results show that self-similarity is still present in the generated weighted PPI networks. This implication will be useful for our treatment of the networks in the third part of the thesis. The second part of the thesis aims to explore the multifractal behaviour of different complex networks. Fractals such as the Cantor set, the Koch curve and the Sierpinski gasket are homogeneous, since these fractals consist of a geometrical figure which repeats on an ever-reduced scale. Fractal analysis is a useful method for their study. However, real-world fractals are not homogeneous; there is rarely an identical motif repeated on all scales. Their singularity may vary on different subsets, implying that these objects are multifractal. Multifractal analysis is a useful way to systematically characterize the spatial heterogeneity of both theoretical and experimental fractal patterns. However, the tools for multifractal analysis of objects in Euclidean space are not suitable for complex networks. In this thesis, we propose a new box-covering algorithm for multifractal analysis of complex networks. This algorithm is demonstrated in the computation of the generalized fractal dimensions of some theoretical networks, namely scale-free networks, small-world networks and random networks, and a class of real networks, namely the PPI networks of different species. Our main finding is the existence of multifractality in scale-free networks and PPI networks, while the multifractal behaviour is not confirmed for small-world networks and random networks. As another application, we generate gene interaction networks for patients and healthy people using the correlation coefficients between microarrays of different genes. Our results confirm the existence of multifractality in gene interaction networks.
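The box-covering idea underlying these calculations can be written down directly. The sketch below is one common variant of the random sequential algorithm, built from breadth-first-search balls with a seeded random generator; it is an illustration, not the thesis's exact implementation:

```python
import random
from collections import deque

def bfs_ball(adj, source, radius):
    """All nodes within graph distance <= radius of `source`."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        if dist[u] == radius:
            continue
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return set(dist)

def random_sequential_box_count(adj, radius, rng):
    """Boxes needed to cover the network at one box radius: repeatedly pick
    a random uncovered node as a seed and cover every uncovered node
    within `radius` of it."""
    uncovered = set(adj)
    boxes = 0
    while uncovered:
        seed = rng.choice(sorted(uncovered))  # sorted for reproducibility
        uncovered -= bfs_ball(adj, seed, radius)
        boxes += 1
    return boxes
```

Averaging the count over many random orderings and fitting log(boxes) against log(radius) yields the fractal dimension; the generalized dimensions of multifractal analysis additionally weight each box by its share of nodes.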
This multifractal analysis then provides a potentially useful tool for gene clustering and identification. The third part of the thesis aims to investigate the topological properties of networks constructed from time series. Characterizing complicated dynamics from time series is a fundamental problem of continuing interest in a wide variety of fields. Recent works indicate that complex network theory can be a powerful tool to analyse time series. Many existing methods for transforming time series into complex networks share a common feature: they define the connectivity of a complex network by the mutual proximity of different parts (e.g., individual states, state vectors, or cycles) of a single trajectory. In this thesis, we propose a new method to construct networks from time series: we define nodes as vectors of a certain length in the time series, and the weight of the edge between any two nodes as the Euclidean distance between the corresponding two vectors. We apply this method to build networks for fractional Brownian motions, whose long-range dependence is characterised by their Hurst exponent. We verify the validity of this method by showing that time series with stronger correlation, hence larger Hurst exponent, tend to have smaller fractal dimension, hence smoother sample paths. We then construct networks via the technique of the horizontal visibility graph (HVG), which has been widely used recently. We confirm a known linear relationship between the Hurst exponent of fractional Brownian motion and the fractal dimension of the corresponding HVG network. In the first application, we apply our newly developed box-covering algorithm to calculate the generalized fractal dimensions of the HVG networks of fractional Brownian motions as well as those for binomial cascades and five bacterial genomes. The results confirm the monoscaling of fractional Brownian motion and the multifractality of the rest.
As an additional application, we discuss the resilience of networks constructed from time series via two different approaches: the visibility graph (VG) and the horizontal visibility graph (HVG). Our finding is that the degree distribution of VG networks of fractional Brownian motions is scale-free (i.e., follows a power law), meaning that one needs to destroy a large percentage of nodes before the network collapses into isolated parts; for HVG networks of fractional Brownian motions, by contrast, the degree distribution has exponential tails, implying that HVG networks would not survive the same kind of attack.
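The horizontal visibility criterion used above is simple enough to state in code: two time indices are linked whenever every intermediate value lies strictly below the smaller of the two endpoints. The brute-force sketch below favours clarity over the faster stack-based constructions in the literature:

```python
def horizontal_visibility_graph(series):
    """Edge set of the horizontal visibility graph of a time series.

    Indices i < j are linked when every intermediate value is strictly
    smaller than min(series[i], series[j]); consecutive points are always
    linked (the intermediate range is empty).
    """
    edges = set()
    n = len(series)
    for i in range(n):
        for j in range(i + 1, n):
            if all(series[k] < min(series[i], series[j])
                   for k in range(i + 1, j)):
                edges.add((i, j))
    return edges
```

Feeding the resulting graph to a box-covering routine is then how quantities such as the HVG fractal dimension of a fractional Brownian motion sample are estimated.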
Abstract:
Commuting in various transport modes represents an activity likely to incur significant exposure to traffic emissions. This study investigated the determinants and characteristics of exposure to ultrafine (<100 nm) particles (UFPs) in four transport modes in Sydney, with a specific focus on exposure in automobiles, which remain the transport mode of choice for approximately 70% of Sydney commuters. UFP concentrations were measured using a portable condensation particle counter (CPC) inside five automobiles commuting on above-ground and tunnel roadways, and in buses, ferries and trains. Determinant factors investigated included wind speed, cabin ventilation (automobiles only) and traffic volume. The results showed that concentrations varied significantly as a consequence of transport mode, vehicle type and ventilation characteristics. The effects of wind speed were minimal relative to those of traffic volume (especially heavy diesel vehicles) and cabin ventilation, with the latter proving to be a strong determinant of UFP ingress into automobiles. The effect of ~70 minutes of commuting on total daily exposure was estimated using a range of UFP concentrations reported for several microenvironments. A hypothetical Sydney resident commuting by automobile and spending 8.5 minutes of their day in the M5 East tunnel could incur anywhere from a lower limit of 3-11% to an upper limit of 37-69% of daily UFP exposure during a return commute, depending on the concentrations they encountered in other microenvironments, the type of vehicle they used and the ventilation setting selected. However, commute-time exposures at either extreme of the values presented are unlikely to occur in practice. The range of exposures estimated for other transport modes was comparable to that of automobiles and, in the case of buses, higher.
Abstract:
Vacuuming can be a source of indoor exposure to biological and non-biological aerosols, although there is little data describing the magnitude of emissions from the vacuum cleaner itself. We therefore sought to quantify emission rates of particles and bacteria from a large group of vacuum cleaners and investigate their potential determinants, including temperature, dust bags, exhaust filters, price and age. Emissions of particles between 0.009 and 20 µm and bacteria were measured from 21 vacuums. Ultrafine (<100 nm) particle emission rates ranged from 4.0 × 10^6 to 1.1 × 10^11 particles min^-1. Emission rates of 0.54 to 20 µm particles ranged from 4.0 × 10^4 to 1.2 × 10^9 particles min^-1. PM2.5 emissions were between 2.4 × 10^-1 and 5.4 × 10^3 µg min^-1. Bacteria emission rates ranged from 0 to 7.4 × 10^5 bacteria min^-1 and were poorly correlated with dust bag bacteria content and particle emissions. Large variability in all emission parameters was observed across the 21 vacuums, and it was largely not attributable to the determinant factors we investigated. Vacuum cleaner emissions contribute to indoor exposure to non-biological and biological aerosols when vacuuming, and this exposure may vary markedly depending on the vacuum used.
Abstract:
For many people, a relatively large proportion of daily exposure to a multitude of pollutants may occur inside an automobile. A key determinant of exposure is the amount of outdoor air entering the cabin (i.e. air change or flow rate). We have quantified this parameter in six passenger vehicles ranging in age from 18 years to <1 year, at three vehicle speeds and under four different ventilation settings. Average infiltration into the cabin with all operable air entry pathways closed was between 1 and 33.1 air changes per hour (ACH) at a vehicle speed of 60 km/h, and between 2.6 and 47.3 ACH at 110 km/h, with these results representing the most (2005 Volkswagen Golf) and least air-tight (1989 Mazda 121) vehicles, respectively. Average infiltration into stationary vehicles parked outdoors varied between ~0 and 1.4 ACH and was moderately related to wind speed. Measurements were also performed under an air recirculation setting with low fan speed, while airflow rate measurements were conducted under two non-recirculate ventilation settings with low and high fan speeds. The windows were closed in all cases, and over 200 measurements were performed. The results can be applied to estimate pollutant exposure inside vehicles.
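The practical meaning of an air change rate can be illustrated with the standard well-mixed box model, in which the cabin concentration relaxes toward the outdoor level at the rate ACH. This is a textbook sketch with invented numbers, not the paper's analysis:

```python
import math

def cabin_concentration(c0, c_out, ach, hours):
    """Well-mixed box model for in-cabin particle concentration.

    dC/dt = ACH * (C_out - C)  =>  C(t) = C_out + (C0 - C_out) * exp(-ACH * t)

    ACH is the air change rate in 1/h and t is in hours.  No deposition
    or in-cabin sources are modelled.
    """
    return c_out + (c0 - c_out) * math.exp(-ach * hours)
```

Starting from a clean cabin in outdoor air at 10^5 particles/cm^3, six minutes of driving brings a cabin at ~33 ACH (the least air-tight case above) to within a few percent of the outdoor level, while one at ~1 ACH reaches only about 10% of it, which is why air-tightness and ventilation setting matter so much for in-vehicle exposure.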
Abstract:
Theory predicts that efficiency prevails on credence goods markets if customers are able to verify which quality they receive from an expert seller. In a series of experiments with endogenous prices, we observe that verifiability fails to result in efficient provision behavior and leads to results very similar to a setting without verifiability. Some sellers always provide appropriate treatment, even if own-money maximization calls for over- or undertreatment. Overall, our endogenous-price results suggest that both inequality aversion and a taste for efficiency play an important role in experts' provision behavior. We contrast the implications of these two motivations theoretically and discriminate between them empirically using a fixed-price design. We then classify experimental experts according to their provision behavior.
Abstract:
Vitamin A deficiency (VAD) is a serious problem in developing countries, affecting approximately 127 million children of preschool age and 7.2 million pregnant women each year. However, this deficiency is readily treated and prevented through adequate nutrition. This can potentially be achieved through genetically engineered biofortification of staple food crops to enhance provitamin A (pVA) carotenoid content. Bananas are the fourth most important food crop, with an annual production of 100 million tonnes, and are widely consumed in areas affected by VAD. However, the fruit pVA content of the most widely consumed banana cultivars is low (~0.2 to 0.5 µg/g dry weight). This includes cultivars such as the East African highland banana (EAHB), the staple crop in countries such as Uganda, where annual banana consumption is approximately 250 kg per person. This fact, in addition to the agronomic properties of staple banana cultivars such as vegetative reproduction and continuous cropping, makes bananas an ideal target for pVA enhancement through genetic engineering. Interestingly, some banana varieties are known to have high fruit pVA content (up to 27.8 µg/g dry weight), although they are not widely consumed due to factors such as cultural preference and availability. The genes involved in carotenoid accumulation during banana fruit ripening have not been well studied, and an understanding of the molecular basis for the differential capacity of bananas to accumulate carotenoids may impact on the effective production of genetically engineered high-pVA bananas. The production of phytoene by the enzyme phytoene synthase (PSY) has been shown to be an important rate-limiting determinant of pVA accumulation in crop systems such as maize and rice. Manipulation of this gene in rice has been used successfully to produce Golden Rice, which exhibits higher seed endosperm pVA levels than wild type plants.
Therefore, it was hypothesised that differences between high and low pVA accumulating bananas could be due either to differences in PSY enzyme activity or to factors regulating the expression of the psy gene. Accordingly, the aim of this thesis was to investigate the role of PSY in accumulation of pVA in banana fruit of representative high (Asupina) and low (Cavendish) pVA banana cultivars by comparing the nucleic acid and encoded amino acid sequences of the banana psy genes, in vivo enzyme activity of PSY in rice callus and expression of PSY through analysis of promoter activity and mRNA levels. Initially, partial sequences of the psy coding region from five banana cultivars were obtained using reverse transcriptase (RT)-PCR with degenerate primers designed to conserved amino acids in the coding region of available psy sequences from other plants. Based on phylogenetic analysis and comparison to maize psy sequences, it was found that in banana, psy occurs as a gene family of at least three members (psy1, psy2a and psy2b). Subsequent analysis of the complete coding regions of these genes from Asupina and Cavendish suggested that they were all capable of producing functional proteins due to high conservation in the catalytic domain. However, inability to obtain the complete mRNA sequences of Cavendish psy2a, and isolation of two non-functional Cavendish psy2a coding region variants, suggested that psy2a expression may be impaired in Cavendish. Sequence analysis indicated that these Cavendish psy2a coding region variants may have resulted from alternate splicing. Evidence of alternate splicing was also observed in one Asupina psy1 coding region variant, which was predicted to produce a functional PSY1 isoform. The complete mRNA sequence of the psy2b coding regions could not be isolated from either cultivar. Interestingly, psy1 was cloned predominantly from leaf while psy2 was obtained preferentially from fruit, suggesting some level of tissue-specific expression.
The Asupina and Cavendish psy1 and psy2a coding regions were subsequently expressed in rice callus and the activity of the enzymes compared in vivo through visual observation and quantitative measurement of carotenoid accumulation. The maize B73 psy1 coding region was included as a positive control. After several weeks on selection, regenerating calli showed a range of colours from white to dark orange representing various levels of carotenoid accumulation. These results confirmed that the banana psy coding regions were all capable of producing functional enzymes. No statistically significant differences in levels of activity were observed between banana PSYs, suggesting that differences in PSY activity were not responsible for differences in the fruit pVA content of Asupina and Cavendish. The psy1 and psy2a promoter sequences were isolated from Asupina and Cavendish gDNA using a PCR-based genome walking strategy. Interestingly, three Cavendish psy2a promoter clones of different sizes, representing possible allelic variants, were identified while only single promoter sequences were obtained for the other Asupina and Cavendish psy genes. Bioinformatic analysis of these sequences identified motifs that were previously characterised in the Arabidopsis psy promoter. Notably, an ATCTA motif associated with basal expression in Arabidopsis was identified in all promoters with the exception of two of the Cavendish psy2a promoter clones (Cpsy2apr2 and Cpsy2apr3). G1 and G2 motifs, linked to light-regulated responses in Arabidopsis, appeared to be differentially distributed between psy1 and psy2a promoters. In the untranscribed regulatory regions, the G1 motifs were found only in psy1 promoters, while the G2 motifs were found only in psy2a. Interestingly, both ATCTA and G2 motifs were identified in the 5’ UTRs of Asupina and Cavendish psy1. 
Consistent with other monocot promoters, introns were present in the Asupina and Cavendish psy1 5’ UTRs, while none were observed in the psy2a 5’ UTRs. Promoters were cloned into expression constructs, driving the β-glucuronidase (GUS) reporter gene. Transient expression of the Asupina and Cavendish psy1 and psy2a promoters in both Cavendish embryogenic cells and Cavendish fruit demonstrated that all promoters were active, except Cpsy2apr2 and Cpsy2apr3. The functional Cavendish psy2a promoter (Cpsy2apr1) appeared to have activity similar to the Asupina psy2a promoter. The activities of the Asupina and Cavendish psy1 promoters were similar to each other, and comparable to those of the functional psy2a promoters. Semi-quantitative PCR analysis of Asupina and Cavendish psy1 and psy2a transcripts showed that psy2a levels were high in green fruit and decreased during ripening, reinforcing the hypothesis that fruit pVA levels were largely dependent on levels of psy2a expression. Additionally, semi-quantitative PCR using intron-spanning primers indicated that high levels of unprocessed psy2a and psy2b mRNA were present in the ripe fruit of Cavendish but not in Asupina. This raised the possibility that differences in intron processing may influence pVA accumulation in Asupina and Cavendish. In this study the role of PSY in banana pVA accumulation was analysed at a number of different levels. Both mRNA accumulation and promoter activity of the psy genes studied were very similar between Asupina and Cavendish. However, in several experiments there was evidence of cryptic or alternate splicing that differed in Cavendish compared to Asupina, although these differences were not conclusively linked to the differences in fruit pVA accumulation between Asupina and Cavendish. Therefore, other carotenoid biosynthetic genes or regulatory mechanisms may be involved in determining pVA levels in these cultivars.
This study has contributed to an increased understanding of the role of PSY in the production of pVA carotenoids in banana fruit, corroborating the importance of this enzyme in regulating carotenoid production. Ultimately, this work may serve to inform future research into pVA accumulation in important crop varieties such as the EAHB and the discovery of avenues to improve such crops through genetic modification.
Abstract:
A small array composed of three monopole elements with very small element spacing, on the order of λ/6 to λ/20, is considered for application in adaptive beamforming. The properties of this 3-port array are governed by strong mutual coupling. It is shown that for signal-to-noise maximization, it is not sufficient to adjust the weights to compensate for the effects of mutual coupling. The necessity for an RF-decoupling network (RF-DN) and its simple realization are shown. The array with closely spaced elements together with the RF-DN represents a superdirective antenna with a directivity of more than 10 dBi. It is shown that the required fractional frequency bandwidth and the available unloaded Q of the antenna and RF-DN structure determine the lower limit for the element spacing.
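How mutual coupling enters the weight computation can be made concrete with a small linear-algebra sketch: if a coupling matrix C maps element excitations to port signals, the effective steering vector becomes C·a and the classic maximum-SNR solution w ∝ R⁻¹(C·a) applies. All matrices below are invented for illustration; the sketch does not model the RF-decoupling network itself:

```python
import numpy as np

def max_snr_weights(coupling, steering, noise_cov):
    """Maximum-SNR weights for a coupled array.

    The coupling matrix maps element excitations to port signals, so the
    effective steering vector is coupling @ steering, and the classic
    solution w proportional to R_n^-1 (C a) applies.  Illustrative only.
    """
    a_eff = coupling @ steering
    w = np.linalg.solve(noise_cov, a_eff)
    return w / np.linalg.norm(w)

def output_snr(w, coupling, steering, noise_cov):
    """Output SNR for weights w, assuming unit signal power."""
    a_eff = coupling @ steering
    signal = np.abs(np.vdot(w, a_eff)) ** 2
    noise = np.real(np.vdot(w, noise_cov @ w))
    return signal / noise
```

Weights that ignore the coupling (e.g. the plain steering vector) give a lower output SNR than coupling-aware weights; the abstract's stronger point is that for superdirective operation even this is not enough, and a hardware RF-DN is needed.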
Abstract:
Background: Integrating 3D virtual world technologies into educational subjects continues to draw the attention of educators and researchers alike. The focus of this study is the use of a virtual world, Second Life, in higher education teaching. In particular, it explores the potential of using a virtual world experience as a learning component situated within a curriculum delivered predominantly through face-to-face teaching methods. Purpose: This paper reports on a research study into the development of a virtual world learning experience designed for marketing students taking a Digital Promotions course. The experience was a field trip into Second Life to allow students to investigate how business branding practices were used for product promotion in this virtual world environment. The paper discusses the issues involved in developing and refining the virtual course component over four semesters. Methods: The study used a pedagogical action research approach, with iterative cycles of development, intervention and evaluation over four semesters. The data analysed were quantitative and qualitative student feedback collected after each field trip, as well as lecturer reflections on each cycle. Sample: Small-scale convenience samples of second- and third-year students studying in a Bachelor of Business degree, majoring in marketing and taking the Digital Promotions subject at a metropolitan university in Queensland, Australia, participated in the study. The samples included students who had and had not experienced the field trip. The numbers of students taking part in the field trip ranged from 22 to 48 across the four semesters. Findings and Implications: The findings from the four iterations of the action research plan helped identify key considerations for incorporating technologies into learning environments. Feedback and reflections from the students and lecturer suggested that an innovative learning opportunity had been developed.
However, pedagogical potential was limited, in part, by technological difficulties and by student perceptions of relevance.
Abstract:
Design for Manufacturing (DFM) is a highly integral methodology in product development, starting from the concept development phase, with the aim of improving manufacturing productivity and maintaining product quality. While Design for Assembly (DFA) focuses on eliminating parts or combining them with other components (Boothroyd, Dewhurst and Knight, 2002), which in most cases means performing a function and a manufacturing operation in a simpler way, DFM follows a more holistic approach. During DFM, the considerable background work required for the conceptual phase is compensated for by a shortening of later development phases. Current DFM projects normally apply an iterative step-by-step approach and eventually transfer to the developer team. Although DFM has been a well-established methodology for about 30 years, a Fraunhofer IAO study from 2009 found that DFM was still one of the key challenges of the German manufacturing industry. A new, knowledge-based approach to DFM, eliminating steps of DFM, was introduced in Paul and Al-Dirini (2009). The concept focuses on a concurrent engineering process between the manufacturing engineering and product development systems, whereas current product realization cycles depend on a rigorous back-and-forth examine-and-correct approach to ensure the compatibility of any proposed design with the DFM rules and guidelines adopted by the company. The key to achieving reductions is to incorporate DFM considerations into the early stages of the design process. A case study for DFM application in an automotive powertrain engineering environment is presented. It is argued that a DFM database needs to be interfaced to the CAD/CAM software, which will restrict designers to the DFM criteria. Consequently, a notable reduction of development cycles can be achieved. The case study follows the hypothesis that current DFM methods do not improve product design in the manner claimed by the DFM method. The critical case was to identify DFA/DFM recommendations or program actions that appeared repeatedly in different sources. Repetitive DFM measures are identified and analyzed, and it is shown how a modified DFM process can mitigate a not fully integrated DFM approach.
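The argued interface between a DFM database and the CAD/CAM software, restricting designers to the company's DFM criteria, can be pictured as an automatic rule check run against design features before release. The rule names and thresholds below are illustrative assumptions, not the study's actual criteria:

```python
# Hedged sketch of a DFM-database-to-CAD rule check: each design
# feature is validated against company DFM rules, and violations are
# reported back to the designer before the design proceeds.
# All rule names and limits here are hypothetical.

DFM_RULES = {
    "min_wall_thickness_mm": 2.0,
    "max_aspect_ratio": 10.0,
    "standard_hole_diameters_mm": {3.0, 4.0, 5.0, 6.0, 8.0, 10.0},
}

def check_design(feature: dict) -> list:
    """Return the DFM violations found in a single CAD feature."""
    violations = []
    if feature.get("wall_thickness_mm", float("inf")) < DFM_RULES["min_wall_thickness_mm"]:
        violations.append("wall too thin for the production process")
    if feature.get("aspect_ratio", 0.0) > DFM_RULES["max_aspect_ratio"]:
        violations.append("aspect ratio exceeds machining limit")
    for d in feature.get("hole_diameters_mm", []):
        if d not in DFM_RULES["standard_hole_diameters_mm"]:
            violations.append("non-standard hole diameter %.1f mm" % d)
    return violations

print(check_design({"wall_thickness_mm": 1.5, "hole_diameters_mm": [7.0]}))
```

Embedding such checks in the CAD environment replaces the back-and-forth examine-and-correct loop with immediate feedback, which is the mechanism by which the abstract argues development cycles can be reduced.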
Abstract:
Design for Manufacturing (DFM) is a highly integral methodology in product development, starting from the concept development phase, with the aim of improving manufacturing productivity. It is used to reduce manufacturing costs in complex production environments, while maintaining product quality. While Design for Assembly (DFA) focuses on eliminating parts or combining them with other components, which in most cases means performing a function and a manufacturing operation in a simpler way, DFM follows a more holistic approach. Common considerations for DFM include standard components, manufacturing tool inventory and capability, material compatibility with the production process, part handling, logistics, tool wear and process optimization, quality control complexity, and Poka-Yoke design. During DFM, the considerable background work required for the conceptual phase is compensated for by a shortening of later development phases. Current DFM projects normally apply an iterative step-by-step approach and eventually transfer to the developer team. The study introduces a new, knowledge-based approach to DFM, eliminating steps of DFM, and shows its implications for the work process. Furthermore, a concurrent engineering process via a transparent interface between the manufacturing engineering and product development systems is brought forward.
Abstract:
Construction is undoubtedly the most dangerous industry in Hong Kong, being responsible for 76 percent of all fatal industrial accidents in the region – around twenty times more than any other industry. In this paper, it is argued that while this rate can be largely reduced by improved production practices in isolation from the project’s physical design, there is also some scope for the design team to contribute to site safety. A new safety assessment method, the Virtual Safety Assessment System (VSAS), is described which offers such assistance. Individual construction workers are presented with risky 3D virtual scenarios from their project and a range of possible actions to select from. The method provides an analysis of the results, including an assessment of the correctness or otherwise of the user’s selections, contributing to an iterative process of retraining and testing until a satisfactory level of knowledge and skill is achieved.
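The retrain-and-test cycle described for VSAS can be sketched as a simple loop: a worker's scenario selections are scored, and training repeats until a pass threshold is met. The scenario data, the 0.8 threshold, and the scoring rule below are illustrative assumptions, not details of the actual system:

```python
# Hedged sketch of the VSAS-style iterative assessment loop:
# score each attempt against the correct actions and stop once the
# worker reaches a (hypothetical) competence threshold.

def assess(answers, correct):
    """Fraction of scenario actions the worker selected correctly."""
    return sum(a == c for a, c in zip(answers, correct)) / len(correct)

def train_until_competent(attempts, correct, threshold=0.8):
    """Return the number of retrain/test cycles needed to pass, or -1."""
    for cycle, answers in enumerate(attempts, start=1):
        if assess(answers, correct) >= threshold:
            return cycle
    return -1  # threshold never reached; further retraining required

correct = ["harness", "guardrail", "signage", "helmet"]
attempts = [
    ["ladder", "guardrail", "signage", "helmet"],   # 3/4 correct -> retrain
    ["harness", "guardrail", "signage", "helmet"],  # 4/4 correct -> pass
]
print(train_until_competent(attempts, correct))  # 2
```

The loop mirrors the abstract's description: assessment of the user's selections feeds back into retraining, and the process iterates until the required level of knowledge and skill is demonstrated.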