<![CDATA[Climate risk index for Italy]]>

We describe a climate risk index that has been developed to inform national climate adaptation planning in Italy and that is further elaborated in this paper. The index supports national authorities in designing adaptation policies and plans, guides the initial problem formulation phase, and identifies administrative areas with a higher propensity to be adversely affected by climate change. The index combines (i) climate change-amplified hazards; (ii) high-resolution indicators of exposure of chosen economic, social, natural and built- or manufactured capital (MC) assets; and (iii) vulnerability, which comprises both present sensitivity to climate-induced hazards and adaptive capacity. We use standardized anomalies of selected extreme climate indices derived from high-resolution regional climate model simulations of the EURO-CORDEX initiative as proxies of climate change-altered weather and climate-related hazards. The exposure and sensitivity assessment is based on indicators of manufactured, natural, social and economic capital assets exposed to and adversely affected by climate-related hazards. MC refers to material goods or fixed assets that support the production process (e.g. industrial machines and buildings); Natural Capital comprises natural resources and processes (renewable and non-renewable) producing goods and services for well-being; Social Capital (SC) addresses factors at the individual (people's health, knowledge, skills) and collective (institutional) level (e.g. families, communities, organizations and schools); and Economic Capital (EC) includes owned and traded goods and services. The results of the climate risk analysis are used to rank the subnational administrative and statistical units according to the climate risk challenges they face, and potentially to guide financial resource allocation for climate adaptation.
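As an illustration of how such a composite index can be assembled, the following is a minimal Python sketch: hazard, exposure and vulnerability scores are normalized and combined into a single risk score per administrative unit. The normalization, the equal weights and all variable names are assumptions made for illustration, not the method used in the paper.

```python
import numpy as np

# Minimal sketch, assuming min-max normalization and equal weights;
# not the paper's actual aggregation scheme.

def risk_index(hazard, exposure, vulnerability, weights=(1/3, 1/3, 1/3)):
    """Weighted combination of min-max normalized risk components."""
    def norm(x):
        return (x - x.min()) / (x.max() - x.min())
    w_h, w_e, w_v = weights
    return w_h * norm(hazard) + w_e * norm(exposure) + w_v * norm(vulnerability)

# Toy data for five administrative units (all values invented)
rng = np.random.default_rng(0)
hazard = rng.normal(size=5)       # e.g. standardized anomaly of an extreme-heat index
exposure = rng.random(5)          # e.g. share of exposed capital assets
vulnerability = rng.random(5)     # e.g. sensitivity net of adaptive capacity

risk = risk_index(hazard, exposure, vulnerability)
print(np.argsort(risk)[::-1])     # units ranked from highest to lowest risk
```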

This article is part of the theme issue ‘Advances in risk assessment for climate change adaptation policy’.

<![CDATA[Hybrid Perovskites: Prospects for Concentrator Solar Cells]]>


Perovskite solar cells have shown a meteoric rise in power conversion efficiency and a steady pace of improvements in their operational stability. Such rapid progress has triggered research into approaches that can boost efficiencies beyond the Shockley–Queisser limit stipulated for a single‐junction cell under normal solar illumination conditions. The tandem solar cell architecture is one such concept that has recently been successfully implemented. However, the approach of solar concentration has not been sufficiently explored so far for perovskite photovoltaics, despite its frequent use in the area of inorganic semiconductor solar cells. Here, the prospects of hybrid perovskites are assessed for use in concentrator solar cells. Solar cell performance parameters are theoretically predicted as a function of solar concentration levels, based on representative assumptions of charge‐carrier recombination and extraction rates in the device. It is demonstrated that perovskite solar cells can fundamentally exhibit appreciably higher energy‐conversion efficiencies under solar concentration, where they are able to exceed the Shockley–Queisser limit and exhibit strongly elevated open‐circuit voltages. It is therefore concluded that attaining sufficient material and device stability under increased illumination levels remains the only significant challenge for perovskite concentrator solar cell applications.
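The strongly elevated open-circuit voltage under concentration follows, to first order, from the logarithmic dependence of Voc on photocurrent in a diode model. The sketch below evaluates this scaling; the 1-sun voltage and ideality factor are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Ideal-diode sketch: Voc(X) = Voc(1 sun) + n*(kB*T/q)*ln(X),
# where X is the solar concentration factor.
THERMAL_VOLTAGE = 0.02585  # kB*T/q at 300 K, in volts
voc_1sun = 1.15            # assumed 1-sun open-circuit voltage (V)
n = 1.5                    # assumed diode ideality factor

for X in (1, 10, 100, 1000):
    voc = voc_1sun + n * THERMAL_VOLTAGE * np.log(X)
    print(f"{X:5d} suns -> Voc = {voc:.3f} V")
```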

<![CDATA[Coherent Nanotwins and Dynamic Disorder in Cesium Lead Halide Perovskite Nanocrystals]]>


Crystal defects in highly luminescent colloidal nanocrystals (NCs) of CsPbX3 perovskites (X = Cl, Br, I) are investigated. Here, using X-ray total scattering techniques and the Debye scattering equation (DSE), we provide evidence that the local structure of these NCs always exhibits orthorhombic tilting of PbX6 octahedra within locally ordered subdomains. These subdomains are hinged at a two-/three-dimensional (2D/3D) network of twin boundaries, across which the coherent arrangement of the Pb ions throughout the whole NC is preserved. The density of these twin boundaries determines the size of the subdomains and results in an apparent higher-symmetry structure on average in the high-temperature modification. Dynamic cooperative rotations of PbX6 octahedra are likely at work at the twin boundaries, causing the rearrangement of the 2D or 3D network, which is particularly effective in the pseudocubic phases. An orthorhombic 3D γ-phase, isostructural with that of CsPbBr3, is found here in as-synthesized CsPbI3 NCs.
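The Debye scattering equation used in the analysis, I(Q) = Σᵢⱼ fᵢfⱼ sin(Q·rᵢⱼ)/(Q·rᵢⱼ), can be evaluated directly from a list of atomic coordinates. The sketch below does so for a toy cubic cluster with unit form factors; the cluster geometry and spacing are assumptions for illustration only.

```python
import numpy as np

def debye_intensity(positions, q):
    """Powder-averaged scattered intensity of one cluster via the DSE,
    with all atomic form factors set to 1 for simplicity."""
    diff = positions[:, None, :] - positions[None, :, :]
    r = np.linalg.norm(diff, axis=-1)
    intensity = np.empty_like(q)
    for k, qk in enumerate(q):
        with np.errstate(invalid="ignore", divide="ignore"):
            term = np.sin(qk * r) / (qk * r)
        term[r == 0] = 1.0  # sin(x)/x -> 1 for the self-terms
        intensity[k] = term.sum()
    return intensity

# Toy 3x3x3 cubic cluster with a 5.8 Å spacing (roughly CsPbBr3-like)
a = 5.8
grid = a * np.array([(i, j, k) for i in range(3)
                     for j in range(3) for k in range(3)], dtype=float)
q = np.linspace(0.5, 8.0, 400)  # scattering vector magnitude (1/Å)
print(debye_intensity(grid, q).max())
```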

<![CDATA[Probing DNA Translocations with Inplane Current Signals in a Graphene Nanoribbon with a Nanopore]]>


Many theoretical studies predict that DNA sequencing should be feasible by monitoring the transverse current through a graphene nanoribbon while a DNA molecule translocates through a nanopore in that ribbon. Such a readout would benefit from the special transport properties of graphene, provide ultimate spatial resolution because of the single-atom layer thickness of graphene, and facilitate high-bandwidth measurements. Previous experimental attempts to measure such transverse inplane signals were, however, dominated by a trivial capacitive response. Here, we explore the feasibility of the approach using a custom-made differential current amplifier that discriminates between the capacitive current signal and the resistive response in the graphene. We fabricate well-defined short and narrow (30 nm × 30 nm) nanoribbons with a 5 nm nanopore in graphene with a high-temperature scanning transmission electron microscope to retain the crystallinity and sensitivity of the graphene. We show that resistive modulations can indeed be observed in the graphene current during DNA translocation through the nanopore, thus demonstrating that DNA sensing with inplane currents in graphene nanostructures is possible. The approach remains exceedingly challenging, however, owing to low device-fabrication yields stemming from the complex multistep device layout.
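Conceptually, the differential readout works by subtracting a reference channel that carries the same capacitive pickup but no resistive signal. The toy sketch below illustrates the idea; the signal shapes and amplitudes are invented for illustration and do not represent the authors' amplifier design.

```python
import numpy as np

# Toy differential measurement: both channels share the capacitive pickup,
# only the signal channel carries the resistive DNA modulation.
t = np.linspace(0.0, 1e-3, 10000)                      # 1 ms trace
resistive = 5e-9 * np.exp(-((t - 5e-4) / 5e-5) ** 2)   # assumed DNA-induced dip (A)
capacitive = 2e-8 * np.sin(2 * np.pi * 1e4 * t)        # shared capacitive pickup (A)

signal_channel = resistive + capacitive
reference_channel = capacitive

recovered = signal_channel - reference_channel         # differential output
print(np.allclose(recovered, resistive))               # True: pickup cancelled
```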

<![CDATA[Pressure-Induced Melting of Confined Ice]]>


The classic regelation experiment of Thomson in the 1850s deals with cutting through an ice cube with a wire, followed by refreezing of the ice behind the wire. The cutting was attributed to pressure-induced melting, but this interpretation has been challenged continuously, and only recently has consensus emerged through the understanding that compression shortens the O:H nonbond and simultaneously lengthens the H–O bond. This H–O elongation leads to energy loss and lowers the melting point. The debate persisted for well over 150 years, mainly because of a poorly defined heat exchange with the environment in the experiment. In our experiment, we achieved thermal isolation from the environment and studied the fully reversible ice–liquid water transition for water confined between graphene and muscovite mica. We observe a transition from two-dimensional (2D) ice into a quasi-liquid phase upon applying a pressure exerted by an atomic force microscopy tip. At room temperature, the critical pressure amounts to about 6 GPa. The transition is completely reversible: refreezing occurs when the applied pressure is lifted. The critical pressure to melt the 2D ice decreases with temperature, and we measured the phase coexistence line between 293 and 333 K. From a Clausius–Clapeyron analysis, we determine the latent heat of fusion of two-dimensional ice to be 0.15 eV/molecule, twice as large as that of bulk ice.
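For reference, the Clausius–Clapeyron step is the standard relation along the coexistence line, a minimal sketch of which is

```latex
\frac{\mathrm{d}P}{\mathrm{d}T} \;=\; \frac{L}{T\,\Delta v}
\qquad\Longrightarrow\qquad
L \;=\; T\,\Delta v\,\frac{\mathrm{d}P}{\mathrm{d}T},
```

where L is the latent heat of fusion per molecule and Δv the volume change per molecule on melting; the measured slope of the coexistence line between 293 and 333 K, together with Δv (a quantity determined in the experiment, not reproduced here), yields the quoted 0.15 eV/molecule.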

<![CDATA[An investigation into inflection-point instability in the entrance region of a pulsating pipe flow]]>

This paper investigates the inflection-point instability that governs the flow disturbance initiated in the entrance region of a pulsating pipe flow. Under such flow conditions, the instability grows within a certain phase region of the pulsating cycle, during which the inflection point in the unsteady mean flow lifts away from the viscous-effect-dominated region known as the Stokes layer. The characteristic frequency of the instability is found to agree with that predicted by the mixing-layer model. By comparison with cases that do not fall into this category, it is further verified that the phenomenon takes place only if the inflection point lifts sufficiently far away from the Stokes layer.
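For orientation, the Stokes layer referred to above has the standard thickness δ = √(2ν/ω) for an oscillation of angular frequency ω in a fluid of kinematic viscosity ν. The sketch below evaluates it for assumed, illustrative parameters, not those of the study.

```python
import numpy as np

# Stokes layer thickness delta = sqrt(2*nu/omega); illustrative values only.
nu = 1.0e-6                   # kinematic viscosity of water (m^2/s)
period = 4.0                  # assumed pulsation period (s)
omega = 2.0 * np.pi / period  # angular frequency (rad/s)

delta = np.sqrt(2.0 * nu / omega)
print(f"Stokes layer thickness = {delta * 1e3:.2f} mm")
```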

<![CDATA[Making a meaningful impact: modelling simultaneous frictional collisions in spatial multibody systems]]>

<![CDATA[Thickness-dependent electrocaloric effect in mixed-phase Pb0.87Ba0.1La0.02(Zr0.6Sn0.33Ti0.07)O3 thin films]]>

<![CDATA[A universal metric for ferroic energy materials]]>

<![CDATA[Arts of electrical impedance tomographic sensing]]>

<![CDATA[Alterations in the coupling functions between cortical and cardio-respiratory oscillations due to anaesthesia with propofol and sevoflurane]]>

<![CDATA[Communication networks beyond the capacity crunch]]>

This issue of Philosophical Transactions of the Royal Society A summarizes the recent discussion meeting ‘Communication networks beyond the capacity crunch’. The purpose of the meeting was to establish the nature of the capacity crunch, estimate the time scales associated with it and begin to find solutions that enable continued growth in a post-crunch era. The meeting confirmed that, in addition to a capacity shortage within a single optical fibre, many other ‘crunches’ are foreseen in the field of communications, both societal and technical. Technical crunches identified included the nonlinear Shannon limit, the wireless spectrum and the distribution of 5G signals (fronthaul and backhaul), while societal influences included net neutrality, creative content generation and distribution, latency and, finally, energy and cost. The meeting concluded with the observation that these many crunches are genuine and may influence our future use of technology, but encouragingly noted that research and business practice are already moving to alleviate many of the negative consequences.

<![CDATA[Maximizing the optical network capacity]]>

Most of the digital data transmitted are carried by optical fibres, which form the greater part of the national and international communication infrastructure. The information-carrying capacity of these networks has increased vastly over the past decades through the introduction of wavelength division multiplexing, advanced modulation formats, digital signal processing and improved optical fibre and amplifier technology. These developments sparked the communication revolution and the growth of the Internet, and have created an illusion of infinite capacity being available. But as the volume of data continues to increase, is there a limit to the capacity of an optical fibre communication channel? The optical fibre channel is nonlinear, and the intensity-dependent Kerr nonlinearity has been suggested as imposing a fundamental limit on optical fibre capacity. Current research is focused on whether this is the case, and on linear and nonlinear techniques, both optical and electronic, to understand, unlock and maximize the capacity of optical communications in the nonlinear regime. This paper describes some of these techniques and discusses future prospects for success in the quest for capacity.
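The origin of the nonlinear capacity limit can be illustrated with a toy Gaussian-noise-style model in which amplifier noise is constant while nonlinear interference grows with the cube of the launch power, so that capacity peaks at an optimum power. The coefficients below are arbitrary assumptions for illustration, not fitted system values.

```python
import numpy as np

# Toy model: SNR(P) = P / (N_ase + eta * P^3); capacity = log2(1 + SNR).
P = np.logspace(-2, 1.5, 200)   # launch power (mW)
n_ase = 0.05                    # assumed amplifier noise power (mW)
eta = 1e-3                      # assumed nonlinear interference coefficient

snr = P / (n_ase + eta * P**3)
capacity = np.log2(1.0 + snr)   # spectral efficiency (bit/s/Hz)

p_opt = P[np.argmax(capacity)]
print(f"optimum launch power = {p_opt:.2f} mW, "
      f"peak = {capacity.max():.1f} bit/s/Hz")
```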

<![CDATA[New optical fibres for high-capacity optical communications]]>

Researchers are within a factor of about 2 of realizing the maximum practical transmission capacity of conventional single-mode fibre transmission technology. It is therefore timely to consider new technological approaches offering the potential for more cost-effective scaling of network capacity than simply installing more and more conventional single-mode systems in parallel. In this paper, I review physical layer options that can be considered to address this requirement, including the potential for reduction in both fibre loss and nonlinearity in single-mode fibres, the development of ultra-broadband fibre amplifiers and, finally, the use of space division multiplexing.

<![CDATA[From photons to big-data applications: terminating terabits]]>

Computer architectures have reached a watershed: the quantity of network data generated by user applications now exceeds the data-processing capacity of any individual computer end-system. It will become impossible to scale existing computer systems while the gap between the quantity of networked data and per-system data-processing capacity continues to grow. Despite this, demand continues to grow unabated in both task variety and task complexity. Networked computer systems provide a fertile environment in which new applications develop. As networked computer systems become akin to infrastructure, any limitation on the growth in capacity and capabilities becomes an important constraint of concern to all computer users. Taking a networked computer system capable of processing terabits per second as a benchmark for scalability, we critique the state of the art in commodity computing and propose a wholesale reconsideration of the design of computer architectures and their attendant ecosystem. Our proposal seeks to reduce costs, save power and increase performance in a multi-scale approach that has potential application from nanoscale to data-centre-scale computers.

<![CDATA[Once the Internet can measure itself]]>

In communications, the obstacle to high bandwidth and reliable transmission is usually the interconnections, not the links. Nowhere is this more evident than on the Internet, where broadband connections to homes, offices and now mobile smartphones are a frequent source of frustration, and the interconnections between the roughly 50 000 subnetworks (autonomous systems or ASes) from which it is formed, even more so. The structure of the AS graph that is formed by these interconnections is unspecified, undocumented and only guessed at through measurement, but it shows surprising efficiencies. Under recent pressures for network neutrality and openness or ‘transparency’, operators, several classes of users and regulatory bodies have a good chance of realizing these efficiencies, but they need improved measurement technology to manage this under continued growth. A long-standing vision of an Internet that measures itself, in which every intelligent port takes part in monitoring, can make this possible and may now be within reach.

<![CDATA[The effect of Mg location on Co-Mg-Ru/γ-Al2O3 Fischer–Tropsch catalysts]]>

The effectiveness of Mg as a promoter of Co-Ru/γ-Al2O3 Fischer–Tropsch catalysts depends on how and when the Mg is added. When the Mg is impregnated into the support before the Co and Ru addition, some Mg is incorporated into the support in the form of MgxAl2O3+x if the material is calcined at 550°C or 800°C after the impregnation, while the remainder is present as amorphous MgO/MgCO3 phases. After subsequent Co-Ru impregnation, MgxCo3−xO4 is formed, which decomposes on reduction, leading to Co(0) particles intimately mixed with Mg, as shown by high-resolution transmission electron microscopy. The process of impregnating Co into an Mg-modified support dissolves the amorphous Mg, and it is this Mg that is then incorporated into MgxCo3−xO4. Acid washing or higher-temperature calcination after Mg impregnation can remove most of this amorphous Mg, resulting in lower values of x in MgxCo3−xO4. Catalytic testing of these materials reveals that Mg incorporation into the Co oxide phase is severely detrimental to the site-time yield, while Mg incorporation into the support may provide some enhancement of activity at high temperature.

<![CDATA[Catalysts for CO2/epoxide ring-opening copolymerization]]>

This article summarizes and reviews recent progress in the development of catalysts for the ring-opening copolymerization of carbon dioxide and epoxides. The copolymerization is an interesting method to add value to carbon dioxide, including from waste sources, and to reduce pollution associated with commodity polymer manufacture. The selection of the catalyst is of critical importance to control the composition, properties and applications of the resultant polymers. This review highlights and exemplifies some key recent findings and hypotheses, in particular using examples drawn from our own research.

<![CDATA[Comparing quantum versus Markov random walk models of judgements measured by rating scales]]>

Quantum and Markov random walk models are proposed for describing how people evaluate stimuli using rating scales. To empirically test these competing models, we conducted an experiment in which participants judged the effectiveness of public health service announcements either from their own personal perspective or from the perspective of another person. The order of the self versus other judgements was manipulated, which produced significant sequential effects. The quantum and Markov models were fitted to the data using the same number of parameters, and the model comparison strongly supported the quantum model over the Markov model.
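To make the contrast concrete, the sketch below evolves a probability distribution (Markov) and an amplitude vector (quantum) over a toy five-point rating scale. The nearest-neighbour generator and Hamiltonian are generic illustrations, not the fitted models from the paper.

```python
import numpy as np
from scipy.linalg import expm

n = 5  # five rating-scale states
K = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)  # chain adjacency

# Markov random walk: probabilities evolve under a master equation.
Q = K - np.diag(K.sum(axis=0))      # intensity matrix (columns sum to zero)
p0 = np.zeros(n); p0[2] = 1.0       # start at the scale midpoint
p_markov = expm(Q * 1.0) @ p0       # distribution after time t = 1

# Quantum random walk: amplitudes evolve unitarily; probabilities = |amp|^2.
psi0 = np.zeros(n, dtype=complex); psi0[2] = 1.0
U = expm(-1j * K * 1.0)             # Hamiltonian H = K, time t = 1
p_quantum = np.abs(U @ psi0) ** 2

print(np.round(p_markov, 3))        # diffusive spread around the midpoint
print(np.round(p_quantum, 3))       # interference-shaped spread
```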