818 results for Functorial Embedding
Abstract:
Redistributed manufacturing (RdM) is an emerging concept which captures the anticipated reshoring and localisation of production from large-scale manufacturing plants to smaller-scale, localised, customisable production units, largely driven by new additive digital production technologies. Critically, community-based digital fabrication workshops, or makespaces, are anticipated to be the hothouse for this new era of localised production and as such are key to future sustainable design and manufacturing practices. In parallel, the concept of the circular economy (CE) conceptualises the move from a linear economy of take-make-waste to a closed-loop system, through repair, remanufacturing, refurbishment and recycling, which maintains the value of materials and resources. Despite the clear interplay between RdM and CE, there is limited research exploring this relationship. In light of these interconnected developments, the aim of this paper is to explore the role of makespaces in contributing to a circular economy through RdM activities. This is achieved through six semi-structured interviews with thought leaders on these topics. The research findings identify barriers and opportunities to both CE and RdM, uncover key overlaps between CE and RdM, and identify a range of future research directions that can support the coming together of these areas. The research contributes to a wider conversation on embedding circular practices within makespaces and their role in RdM.
Abstract:
Redistributed manufacturing is an emerging concept which captures the anticipated reshoring and localisation of production from large-scale mass manufacturing plants to smaller-scale, localised, customisable production units, largely driven by new digital production technologies. Critically, community-based digital fabrication workshops, or makespaces, are anticipated to be one hothouse for this new era of localised production and as such are key to future sustainable design and manufacturing practices. In parallel, the concept of the circular economy conceptualises the move from a linear economy of take-make-waste to a closed-loop system, through repair, remanufacturing, and recycling, to ultimately extend the value of products and materials. Despite the clear interplay between redistributed manufacturing and the circular economy, there is limited research exploring this relationship. In light of these interconnected developments, the aim of this paper is to explore the role of makespaces in contributing to a circular economy through redistributed manufacturing activities. This is achieved through six semi-structured interviews with thought leaders on these topics. The research findings identify barriers and opportunities to both the circular economy and redistributed manufacturing, uncover overlaps between the circular economy and redistributed manufacturing, and identify a range of future research directions that can support the coming together of these areas. The research contributes to a wider conversation on embedding circular practices within makespaces and their role in redistributed manufacturing.
Abstract:
A three-dimensional finite volume, unstructured mesh (FV-UM) method for dynamic fluid–structure interaction (DFSI) is described. Fluid–structure interaction, as applied to flexible structures, has wide application in diverse areas such as flutter in aircraft, wind response of buildings, and flows in elastic pipes and blood vessels. It involves the coupling of fluid flow and structural mechanics, two fields that are conventionally modelled using dissimilar methods, so a single comprehensive computational model of both phenomena is a considerable challenge. Until recently, work in this area focused on one phenomenon and represented the behaviour of the other more simply. More recently, strategies for solving the full coupling between the fluid and solid mechanics behaviour have been developed. A key contribution has been made by Farhat et al. [Int. J. Numer. Meth. Fluids 21 (1995) 807], employing FV-UM methods for solving the Euler flow equations, a conventional finite element method for the elastic solid mechanics, and the spring-based mesh procedure of Batina [AIAA paper 0115, 1989] for mesh movement. In this paper, we describe an approach which broadly exploits the three-field strategy described by Farhat for fluid flow, structural dynamics and mesh movement but, in the context of DFSI, contains a number of novel features: a single mesh covering the entire domain, a Navier–Stokes flow, a single FV-UM discretisation approach for both the flow and solid mechanics procedures, an implicit predictor–corrector version of the Newmark algorithm, and a single code embedding the whole strategy.
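The implicit predictor–corrector Newmark scheme mentioned in this abstract can be made concrete with a minimal sketch. The code below is a generic Newmark-beta step for a linear structural system M a + C v + K u = f, not the paper's coupled FV-UM solver; the matrix names, the a-form of the update, and the default parameters are illustrative assumptions.

```python
import numpy as np

def newmark_step(M, C, K, u, v, a, f_next, dt, beta=0.25, gamma=0.5):
    """One implicit Newmark-beta step (predictor-corrector, a-form) for
    M a + C v + K u = f.  Generic linear sketch, not the paper's coupled solver."""
    # Predictor: advance displacement and velocity using the old acceleration.
    u_p = u + dt * v + 0.5 * dt**2 * (1.0 - 2.0 * beta) * a
    v_p = v + dt * (1.0 - gamma) * a
    # Implicit solve for the new acceleration.
    A = M + gamma * dt * C + beta * dt**2 * K
    a_new = np.linalg.solve(A, f_next - C @ v_p - K @ u_p)
    # Corrector: complete displacement and velocity with the new acceleration.
    u_new = u_p + beta * dt**2 * a_new
    v_new = v_p + gamma * dt * a_new
    return u_new, v_new, a_new
```

With beta = 0.25 and gamma = 0.5 this is the unconditionally stable average-acceleration variant commonly used for structural dynamics.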
Abstract:
I explore transformative social innovation in agriculture through a particular case of agroecological innovation, the System of Rice Intensification (SRI) in India. Insights from social innovation theory that emphasize the roles of social movements and the reengagement of vulnerable populations in societal transformation can help reinstate the missing “social” dimension in current discourses on innovation in India. India has a rich and vibrant tradition of social innovation wherein vulnerable communities have engaged in collective experimentation. This is often missed in official or formal accounts. Social innovations such as SRI can help recreate these possibilities for change from outside the mainstream due to newer opportunities that networks present in the twenty-first century. I show how local and international networks led by Civil Society Organizations have reinterpreted and reconstructed game-changing macrotrends in agriculture. This has enabled the articulation and translation of an alternative paradigm for sustainable transitions within agriculture from outside formal research channels. These social innovations, however, encounter stiff opposition from established actors in agricultural research systems. Newer heterogeneous networks, as witnessed in SRI, provide opportunities for researchers within hierarchical research systems to explore, experiment, and create newer norms of engagement with Civil Society Organizations and farmers. I emphasize valuing and embedding diversity of practices and institutions at an early stage to enable systems to be more resilient and adaptable in sustainable transitions.
Abstract:
Since at least the 1980s, a growing number of companies have set up an ethics or a compliance program within their organization. However, in the field of business management, there is a paucity of research concerning these management systems. This observation warranted the present investigation of one company's compliance program. Compliance programs are set up so that individuals working within an organization observe the laws and regulations which pertain to their work. This study used a constructivist grounded theory methodology to examine the process by which a specific compliance program, that of Siemens Canada Limited, was implemented throughout its organization. In conformity with this methodology, instead of proceeding with the investigation in accordance with a particular theoretical framework, the study established a number of theoretical constructs used strictly as reference points. The study's research question was stated as: what are the characteristics of the process by which Siemens' compliance program integrated itself into the existing organizational structure and gained employee acceptance? Data consisted of documents produced by the company and of interviews done with twenty-four managers working for Siemens Canada Limited. The researcher used QSR-Nvivo computer-assisted software to code transcripts and to help with analyzing interviews and documents. Triangulation was done by using a number of analysis techniques and by constantly comparing findings with extant theory. A descriptive model of the implementation process, grounded in the experience of participants and in the contents of the documents, emerged from the data. The process was called "Remolding," remolding being the core category that emerged. This main process consisted of two sub-processes identified as "embedding" and "appraising." The investigation was able to provide a detailed account of the appraising process. It identified that employees appraised the compliance program according to three facets: the impact of the program on the employee's daily activities, the relationship employees have with the local compliance organization, and the relationship employees have with the corporate ethics identity. The study suggests that a company that is entertaining the idea of implementing a compliance program should consider all three facets. In particular, it suggests that any company interested in designing and implementing a compliance program should pay particular attention to its corporate ethics identity. This is because employees' acceptance of the program is influenced by their comparison of the company's ethics identity to their local ethics identity. Implications of the study suggest that personnel responsible for the development and organizational support of a compliance program should understand the appraisal process by which employees build their relationship with the program. The originality of this study is that it points out emphatically that companies must pay special attention to developing a corporate ethics identity which is coherent, well documented and well explained.
Abstract:
This interactive symposium will focus on the use of different technologies in developing innovative practice in teacher education at one university in England. Technology Enhanced Learning (TEL) is a field of educational policy and practice that has the power to ignite diametrically opposing views and reactions amongst teachers and teacher educators, ranging across a spectrum from immense enthusiasm to untold terror. In a field where the skills and experience of individuals vary from those of digital natives (Prensky 2001) to lags and lurkers in digital spaces, the challenges of harnessing the potential of TEL are complex. The challenges include developing the IT skills of trainees and educators and the creative application of these skills to pedagogy in all areas of the curriculum. The symposium draws on examples from primary, secondary and post-compulsory teacher education to discuss issues and approaches to developing research capacity and innovative practice using different e-tools, many of which are freely available. The first paper offers theoretical and policy perspectives on finding spaces in busy professional lives to engage in research and develop research-informed practice. It draws on notions of teachers as researchers, practitioner research and evidence-based practice to argue that engagement in research is integral to teacher education and an empowering source of creative professional learning for teachers and teacher educators. Whilst acknowledging the challenges of this stance, examples from our own research practice illustrate how e-tools can assist us in building the capacity and confidence of staff and students in researching and enhancing teaching, learning and assessment practice. The second paper discusses IT skills development through the TEL pathway for trainee teachers in secondary education across different curriculum subjects. The lead tutor for the TEL pathway will use examples of activities developed with trainee teachers and university subject tutors to enhance their skills in using e-tools, such as QR codes, Kahoot, Padlet, Pinterest and cloud-based learning. The paper will also focus on how these skills and tools can be used for action research, evaluation and feedback, and for marking and administrative tasks. The discussion will finish with thoughts on widening trainee teachers’ horizons into the future direction of educational technology. The third paper considers institutional policies and strategies for promoting and embedding TEL, including an initiative called ‘The Learning Conversation’, which aims ‘to share, highlight, celebrate, discuss, problematise, find things out...’ about TEL through an online space. The lead for ‘The Learning Conversation’ will offer reflections on this and other initiatives across the institution involving trainee teachers, university subject tutors, librarians and staff in student support services who are using TEL to engage, enthuse and support students on campus and during placements in schools. The fourth paper reflects on the use of TEL to engage with trainee teachers in post-compulsory education. This sector of education and training is more fragmented than the primary and secondary school sectors, so the challenges of building a community of practice that can support the development of innovative practice are greater. Discussant: the wider use of technologies in a university centre for teacher education; course management, recruitment and mentor training.
Abstract:
While news stories are an important traditional medium to broadcast and consume news, microblogging has recently emerged as a place where people can discuss, disseminate, collect or report information about news. However, the massive information in the microblogosphere makes it hard for readers to keep up with these real-time updates. This is especially a problem when it comes to breaking news, where people are more eager to know “what is happening”. Therefore, this dissertation is intended as an exploratory effort to investigate computational methods to augment human effort when monitoring the development of breaking news on a given topic from a microblog stream by extractively summarizing the updates in a timely manner. More specifically, given an interest in a topic, either entered as a query or presented as an initial news report, a microblog temporal summarization system is proposed to filter microblog posts from a stream with three primary concerns: topical relevance, novelty, and salience. Considering the relatively high arrival rate of microblog streams, a cascade framework consisting of three stages is proposed to progressively reduce the quantity of posts. For each step in the cascade, this dissertation studies methods that improve over current baselines. In the relevance filtering stage, query and document expansion techniques are applied to mitigate sparsity and vocabulary mismatch issues. The use of word embedding as a basis for filtering is also explored, using unsupervised and supervised modeling to characterize lexical and semantic similarity. In the novelty filtering stage, several statistical ways of characterizing novelty are investigated and ensemble learning techniques are used to integrate results from these diverse techniques. These results are compared with a baseline clustering approach using both standard and delay-discounted measures. In the salience filtering stage, because of the real-time prediction requirement, a method of learning verb phrase usage from past relevant news reports is used in conjunction with some standard measures for characterizing writing quality. Following a Cranfield-like evaluation paradigm, this dissertation includes a series of experiments to evaluate the proposed methods for each step, and for the end-to-end system. New microblog novelty and salience judgments are created, building on existing relevance judgments from the TREC Microblog track. The results point to future research directions at the intersection of social media, computational journalism, information retrieval, automatic summarization, and machine learning.
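As an illustration of the relevance-filtering stage described above, the sketch below scores incoming posts by cosine similarity between averaged word embeddings of the query and of each post. It is a minimal sketch under stated assumptions: the `vectors` lookup table and the 0.6 threshold are hypothetical stand-ins, not the dissertation's trained models or tuned parameters, and the real system also uses query/document expansion and supervised similarity modeling.

```python
import numpy as np

def embed(text, vectors):
    """Average word-embedding representation of a text.
    `vectors` maps tokens to numpy arrays (e.g. pre-trained embeddings); OOV words are skipped."""
    vecs = [vectors[w] for w in text.lower().split() if w in vectors]
    return np.mean(vecs, axis=0) if vecs else None

def relevance_filter(query, posts, vectors, threshold=0.6):
    """Keep posts whose embedding-space cosine similarity to the query exceeds a threshold.
    The threshold is illustrative only."""
    q = embed(query, vectors)
    kept = []
    for post in posts:
        p = embed(post, vectors)
        if q is None or p is None:
            continue
        sim = float(np.dot(q, p) / (np.linalg.norm(q) * np.linalg.norm(p)))
        if sim >= threshold:
            kept.append((sim, post))
    return kept
```

Posts surviving this stage would then be handed to the novelty and salience filters of the cascade.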
Abstract:
As collections of archived digital documents continue to grow, the maintenance of an archive, and the quality of reproduction from the archived format, become important long-term considerations. In particular, Adobe's PDF is now an important final-form standard for archiving and distributing electronic versions of technical documents. It is important that all embedded images in the PDF, and any fonts used for text rendering, should at the very minimum be easily readable on screen. Unfortunately, because PDF is based on PostScript technology, it allows the embedding of bitmap fonts in Adobe Type 3 format as well as higher-quality outline fonts in TrueType or Adobe Type 1 formats. Bitmap fonts do not generally perform well when they are scaled and rendered on low-resolution devices such as workstation screens. The work described here investigates how a plug-in to Adobe Acrobat enables bitmap fonts to be substituted by corresponding outline fonts using a checksum matching technique against a canonical set of bitmap fonts, as originally distributed. The target documents for our initial investigations are those PDF files produced by (La)TeX systems when set up in a default (bitmap font) configuration. For all bitmap fonts where recognition exceeds a certain confidence threshold, replacement fonts in Adobe Type 1 (outline) format can be substituted, with consequent improvements in file size, screen display quality and rendering speed. The accuracy of font recognition is discussed together with the prospects of extending these methods to bitmap-font PDF files from sources other than (La)TeX.
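The checksum-matching idea can be sketched in a few lines. The example below is a hypothetical illustration only: it hashes a font's glyph bitmap data and looks the digest up in a table keyed by checksums of canonical bitmap fonts; the actual plug-in works on font objects inside Acrobat and applies a confidence threshold to partial matches rather than the exact lookup shown here.

```python
import hashlib

# Hypothetical table: checksum of a canonical bitmap (Type 3) font -> outline replacement name.
CANONICAL_OUTLINES = {
    # "md5-of-canonical-bitmap-font-data": "corresponding Adobe Type 1 font",
}

def font_checksum(glyph_bitmaps):
    """Checksum over the font's glyph bitmap data, concatenated in glyph order."""
    h = hashlib.md5()
    for bitmap in glyph_bitmaps:   # each entry: raw bytes of one glyph's bitmap
        h.update(bitmap)
    return h.hexdigest()

def find_replacement(glyph_bitmaps):
    """Return the matching outline font name, or None if no exact match exists."""
    return CANONICAL_OUTLINES.get(font_checksum(glyph_bitmaps))
```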
Abstract:
We classify the N = 4 supersymmetric AdS_5 backgrounds that arise as solutions of five-dimensional N = 4 gauged supergravity. We express our results in terms of the allowed embedding tensor components and identify the structure of the associated gauge groups. We show that the moduli space of these AdS vacua is of the form SU(1, m)/(U(1) x SU(m)) and discuss our results regarding holographically dual N = 2 SCFTs and their conformal manifolds.
Abstract:
The goal of image retrieval and matching is to find and locate object instances in images from a large-scale image database. While visual features are abundant, how to combine them to improve upon the performance of individual features remains a challenging task. In this work, we focus on leveraging multiple features for accurate and efficient image retrieval and matching. We first propose two graph-based approaches to rerank initially retrieved images for generic image retrieval. In the graph, vertices are images while edges are similarities between image pairs. Our first approach employs a mixture Markov model based on a random walk model on multiple graphs to fuse graphs. We introduce a probabilistic model to compute the importance of each feature for graph fusion under a naive Bayesian formulation, which requires statistics of similarities from a manually labeled dataset containing irrelevant images. To reduce human labeling, we further propose a fully unsupervised reranking algorithm based on a submodular objective function that can be efficiently optimized by a greedy algorithm. By maximizing an information gain term over the graph, our submodular function favors a subset of database images that are similar to query images and resemble each other. The function also exploits the rank relationships of images from multiple ranked lists obtained by different features. We then study a more well-defined application, person re-identification, where the database contains labeled images of human bodies captured by multiple cameras. Re-identifications from multiple cameras are regarded as related tasks to exploit shared information. We apply a novel multi-task learning algorithm using both low-level features and attributes. A low-rank attribute embedding is jointly learned within the multi-task learning formulation to embed original binary attributes into a continuous attribute space, where incorrect and incomplete attributes are rectified and recovered. To locate objects in images, we design an object detector based on object proposals and deep convolutional neural networks (CNN) in view of the emergence of deep networks. We improve a Fast RCNN framework and investigate two new strategies to detect objects accurately and efficiently: scale-dependent pooling (SDP) and cascaded rejection classifiers (CRC). The SDP improves detection accuracy by exploiting appropriate convolutional features depending on the scale of input object proposals. The CRC effectively utilizes convolutional features and greatly eliminates negative proposals in a cascaded manner, while maintaining a high recall for true objects. The two strategies together improve the detection accuracy and reduce the computational cost.
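To make the greedy optimization of a submodular reranking objective concrete, here is a minimal sketch under stated assumptions: it greedily selects k database images maximizing a facility-location coverage score over a fused similarity matrix. This is a generic submodular surrogate for illustration, not the dissertation's exact information-gain objective or its rank-relationship terms.

```python
import numpy as np

def greedy_rerank(sim, k):
    """Greedily pick k images maximizing facility-location coverage over an
    n x n similarity matrix `sim` (query similarities assumed folded in).
    Generic submodular surrogate, not the paper's exact objective."""
    n = sim.shape[0]
    selected, covered = [], np.zeros(n)
    for _ in range(k):
        # Marginal gain of adding image j: total improvement in best coverage of every image.
        gains = np.maximum(sim, covered[None, :]).sum(axis=1) - covered.sum()
        gains[selected] = -np.inf          # never re-pick an already selected image
        j = int(np.argmax(gains))
        selected.append(j)
        covered = np.maximum(covered, sim[j])
    return selected
```

Because the coverage function is monotone submodular, this greedy procedure carries the usual (1 - 1/e) approximation guarantee for cardinality-constrained maximization.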
Abstract:
There are two main aims of the paper. The first one is to extend the criterion for the precompactness of sets in Banach function spaces to the setting of quasi-Banach function spaces. The second one is to extend the criterion for the precompactness of sets in the Lebesgue spaces $L_p(\Rn)$, $1 \leq p < \infty$, to the so-called power quasi-Banach function spaces.
These criteria are applied to establish compact embeddings of abstract Besov spaces into quasi-Banach function spaces. The results are illustrated on embeddings of Besov spaces $B^s_{p,q}(\Rn)$, $0
Abstract:
Starting in December 1982 the University of Nottingham decided to phototypeset almost all of its examination papers `in house' using the troff, tbl and eqn programs running under UNIX. This tutorial lecture highlights the features of the three programs with particular reference to their strengths and weaknesses in a production environment. The following issues are particularly addressed: Standards -- all three software packages require the embedding of commands and the invocation of pre-written macros, rather than `what you see is what you get'. This can help to enforce standards, in the absence of traditional compositor skills. Hardware and Software -- the requirements are analysed for an inexpensive preview facility and a low-level interface to the phototypesetter. Mathematical and Technical papers -- the fine-tuning of eqn to impose a standard house style. Staff skills and training -- systems of this kind do not require the operators to have had previous experience of phototypesetting. Of much greater importance is willingness and flexibility in learning how to use computer systems.
Abstract:
Network Virtualization is a key technology for the Future Internet, allowing the deployment of multiple independent virtual networks that use resources of the same underlying infrastructure. An important challenge in the dynamic provision of virtual networks resides in the optimal allocation of physical resources (nodes and links) to the requirements of virtual networks. This problem is known as Virtual Network Embedding (VNE). To solve this problem, previous research has focused on designing algorithms based on the optimization of a single objective. In contrast, in this work we present a multi-objective algorithm, called VNE-MO-ILP, for solving the dynamic VNE problem, which calculates an approximation of the Pareto Front considering simultaneously resource utilization and load balancing. Experimental results show evidence that the proposed algorithm is better than, or at least comparable to, a state-of-the-art algorithm. Two performance metrics were simultaneously evaluated: (i) Virtual Network Request Acceptance Ratio and (ii) Revenue/Cost Relation. The size of the test networks used in the experiments shows that the proposed algorithm scales well in execution times, for networks of 84 nodes.
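Since the algorithm reports an approximated Pareto front over two objectives, a small dominance filter makes the idea concrete. The sketch below is illustrative only: the candidate names and objective values are made up, both objectives are assumed to be minimized, and the actual VNE-MO-ILP constructs its front via integer linear programming rather than by filtering a precomputed candidate list.

```python
def pareto_front(solutions):
    """Return the non-dominated subset of candidate embeddings.
    Each solution is (label, objectives) with objectives to be minimized,
    e.g. (resource_cost, load_imbalance)."""
    front = []
    for label, obj in solutions:
        dominated = any(
            all(o2 <= o1 for o1, o2 in zip(obj, other)) and other != obj
            for _, other in solutions
        )
        if not dominated:
            front.append((label, obj))
    return front

# Example: three hypothetical mappings of one virtual network request.
candidates = [("map_a", (10.0, 0.30)), ("map_b", (12.0, 0.20)), ("map_c", (13.0, 0.35))]
print(pareto_front(candidates))  # map_a and map_b survive; map_c is dominated by map_a
```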
Abstract:
We study a one-dimensional lattice model of interacting spinless fermions. This model is integrable for both periodic and open boundary conditions; the latter case includes the presence of Grassmann valued non-diagonal boundary fields breaking the bulk U(1) symmetry of the model. Starting from the embedding of this model into a graded Yang-Baxter algebra, an infinite hierarchy of commuting transfer matrices is constructed by means of a fusion procedure. For certain values of the coupling constant related to anisotropies of the underlying vertex model taken at roots of unity, this hierarchy is shown to truncate giving a finite set of functional equations for the spectrum of the transfer matrices. For generic coupling constants, the spectral problem is formulated in terms of a functional (or TQ-)equation which can be solved by Bethe ansatz methods for periodic and diagonal open boundary conditions. Possible approaches for the solution of the model with generic non-diagonal boundary fields are discussed.
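For orientation, the functional (TQ-) equation referred to in this abstract has, in its generic Baxter form, the shape written below. This is the standard textbook form only; the model-specific coefficients, shifts and Q-function structure in the paper depend on its vertex weights and boundary terms.

```latex
% Generic Baxter TQ-relation (illustrative; a(\lambda), d(\lambda) and the shift \eta
% are fixed by the particular vertex model and boundary conditions):
\Lambda(\lambda)\,Q(\lambda) \;=\; a(\lambda)\,Q(\lambda-\eta) \;+\; d(\lambda)\,Q(\lambda+\eta),
\qquad
Q(\lambda) \;=\; \prod_{j=1}^{M}\sinh(\lambda-\lambda_j).
% Demanding that \Lambda(\lambda) be regular at the zeros \lambda_j of Q
% reproduces the Bethe ansatz equations for the spectrum.
```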
Abstract:
Many existing encrypted Internet protocols leak information through packet sizes and timing. Though seemingly innocuous, prior work has shown that such leakage can be used to recover part or all of the plaintext being encrypted. The prevalence of encrypted protocols as the underpinning of such critical services as e-commerce, remote login, and anonymity networks, and the increasing feasibility of attacks on these services represent a considerable risk to communications security. Existing mechanisms for preventing traffic analysis focus on re-routing and padding. These prevention techniques have considerable resource and overhead requirements. Furthermore, padding is easily detectable and, in some cases, can introduce its own vulnerabilities. To address these shortcomings, we propose embedding real traffic in synthetically generated encrypted cover traffic. Novel to our approach is our use of realistic network protocol behavior models to generate cover traffic. The observable traffic we generate also has the benefit of being indistinguishable from other real encrypted traffic, further thwarting an adversary's ability to target attacks. In this dissertation, we introduce the design of a proxy system called TrafficMimic that implements realistic cover traffic tunneling and can be used alone or integrated with the Tor anonymity system. We describe the cover traffic generation process, including the subtleties of implementing a secure traffic generator. We show that TrafficMimic cover traffic can fool a complex protocol classification attack with 91% of the accuracy of real traffic. TrafficMimic cover traffic is also not detected by a binary classification attack specifically designed to detect TrafficMimic. We evaluate the performance of tunneling with independent cover traffic models and find that they are comparable to, and in some cases more efficient than, generic constant-rate defenses. We then use simulation and analytic modeling to understand the performance of cover traffic tunneling more deeply. We find that we can take measurements from real or simulated traffic with no tunneling and use them to estimate parameters for an accurate analytic model of the performance impact of cover traffic tunneling. Once validated, we use this model to better understand how delay, bandwidth, tunnel slowdown, and stability affect cover traffic tunneling. Finally, we take the insights from our simulation study and develop several biasing techniques that we can use to match the cover traffic to the real traffic while simultaneously bounding external information leakage. We study these biasing methods using simulation and evaluate their security using a Bayesian inference attack. We find that we can safely improve performance with biasing while preventing both traffic analysis and defense detection attacks. We then apply these biasing methods to the real TrafficMimic implementation and evaluate it on the Internet. We find that biasing can provide 3-5x improvement in bandwidth for bulk transfers and 2.5-9.5x speedup for Web browsing over tunneling without biasing.
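To illustrate the cover-traffic tunneling idea (not TrafficMimic's actual implementation), the sketch below draws a synthetic schedule of packet times and sizes from a toy inter-arrival model, then fills each scheduled packet with as many queued real bytes as fit, padding the remainder, so observable packet times and sizes are determined by the cover model alone. All function names, distributions, and parameter values are hypothetical assumptions; the real system samples from learned protocol behavior models and adds biasing.

```python
import random

def cover_schedule(n, mean_gap=0.05, sizes=(512, 1024, 1460)):
    """Toy cover-traffic schedule: n (send_time, packet_size) pairs with
    exponential inter-arrival gaps and sizes drawn uniformly from `sizes`."""
    t, schedule = 0.0, []
    for _ in range(n):
        t += random.expovariate(1.0 / mean_gap)
        schedule.append((t, random.choice(sizes)))
    return schedule

def tunnel(real_bytes, schedule):
    """Embed real payload into the cover schedule: every scheduled packet is sent
    at its scheduled time and size; real data fills as much of it as is queued and
    the rest is padding, so the wire trace follows the cover model only."""
    sent, offset = [], 0
    for when, size in schedule:
        payload = real_bytes[offset:offset + size]
        offset += len(payload)
        sent.append((when, size, len(payload)))   # (time, wire size, real bytes carried)
    return sent, offset  # offset = real bytes delivered so far
```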