
Differentiable Programs with Neural Libraries

Alexander L. Gaunt¹, Marc Brockschmidt¹, Nate Kushman¹, Daniel Tarlow²

Abstract

We develop a framework for combining differentiable programming languages with neural networks. Using this framework we create end-to-end trainable systems that learn to write interpretable algorithms with perceptual components. We explore the benefits of inductive biases for strong generalization and modularity that come from the program-like structure of our models. In particular, modularity allows us to learn a library of (neural) functions which grows and improves as more tasks are solved. Empirically, we show that this leads to lifelong learning systems that transfer knowledge to new tasks more effectively than baselines.

1. Introduction

Recently, there has been much work on learning algorithms using neural networks. Following the idea of the Neural Turing Machine (Graves et al., 2014), this work has focused on extending neural networks with interpretable components that are differentiable versions of traditional computer components, such as external memories, stacks, and discrete functional units. However, trained models are not easily interpreted as the learned algorithms are embedded in the weights of a monolithic neural network. In this work we flip the roles of the neural network and differentiable computer architecture. We consider interpretable controller architectures which express algorithms using differentiable programming languages (Gaunt et al., 2016; Riedel et al., 2016; Bunel et al., 2016). In our framework, these controllers can execute discrete functional units (such as those considered by past work), but also have access to a library of trainable, uninterpretable neural network functional units. The system is end-to-end differentiable such that the source code representation of the algorithm is jointly induced with the parameters of the neural function library. In this paper we explore potential advantages of this class of hybrid model over purely neural systems, with a particular emphasis on lifelong learning systems that learn from weak supervision.

¹Microsoft Research. ²Google Brain, Canada (work done while at Microsoft). Correspondence to: Alexander L. Gaunt.

Proceedings of the 34th International Conference on Machine Learning, Sydney, Australia, PMLR 70, 2017. Copyright 2017 by the author(s).

We concentrate on perceptual programming by example (PPBE) tasks that have both algorithmic and perceptual elements to exercise the traditional strengths of program-like and neural components. Examples of this class of task include navigation tasks guided by images or natural language. Using an illustrative set of PPBE tasks we aim to emphasize two specific benefits of our hybrid models.

First, the source code representation in the controller allows modularity: the neural components are small functions that specialize to different tasks within the larger program structure. It is easy to separate and share these functional units to transfer knowledge between tasks. In contrast, the absence of well-defined functions in purely neural solutions makes effective knowledge transfer more difficult, leading to problems such as catastrophic forgetting in multitask and lifelong learning (McCloskey & Cohen, 1989; Ratcliff, 1990). In our experiments, we consider a lifelong learning setting in which we train the system on a sequence of PPBE tasks that share perceptual subtasks.

Second, the source code representation enforces an inductive bias that favors learning solutions that exhibit strong generalization. For example, once a suitable control flow structure (e.g., a for loop) for a list manipulation problem has been learned on short examples, it trivially generalizes to lists of arbitrary length. In contrast, although some neural architectures demonstrate a surprising ability to generalize, the reasons for this generalization are not fully understood (Zhang et al., 2017), and generalization performance invariably degrades as inputs become increasingly distinct from the training data.

This paper is structured as follows. We first present a language, called NEURAL TERPRET (NTPT), for specifying hybrid source code/neural network models (Sec. 2), and then introduce a sequence of PPBE tasks (Sec. 3). Our NTPT models and purely neural baselines are described in Sec. 4 and Sec. 5 respectively. The experimental results are presented in Sec. 6.

Figure 1 listing.

Instruction set:

# Discrete operations
@Runtime([max_int], max_int)
def INC(a): return (a + 1) % max_int

@Runtime([max_int], max_int)
def DEC(a): return (a - 1) % max_int

@Runtime([W, 5], W)
def MOVE_X(x, dir):
    if dir == 1: return (x + 1) % W    # dir 1: +x
    elif dir == 3: return (x - 1) % W  # dir 3: -x
    else: return x

@Runtime([H, 5], H)
def MOVE_Y(y, dir):
    if dir == 2: return (y - 1) % H    # dir 2: -y
    elif dir == 4: return (y + 1) % H  # dir 4: +y
    else: return y

# Helper functions
@Runtime([5], 2)
def eq_zero(dir): return 1 if dir == 0 else 0

# Learned operations
def LOOK(img): pass

Declaration & initialization:

# constants
max_int = 15; n_instr = 3; T = 45
W = 5; H = 3; w = 28; h = 28

# variables
img_grid = InputTensor(w, h)[W, H]
init_X = Input(W)
init_Y = Input(H)
final_X = Output(W)
final_Y = Output(H)
path_len = Output(max_int)
is_halted_at_end = Output(2)
instr = Param(4)[n_instr]
goto = Param(n_instr)[n_instr]

X = Var(W)[T]
Y = Var(H)[T]
dir = Var(5)[T]
reg = Var(max_int)[T]
instr_ptr = Var(n_instr)[T]
is_halted = Var(2)[T]

# initialization
X[0].set_to(init_X)
Y[0].set_to(init_Y)
dir[0].set_to(1)
reg[0].set_to(0)
instr_ptr[0].set_to(0)

Execution model:

for t in range(T - 1):
    is_halted[t].set_to(eq_zero(dir[t]))
    if is_halted[t] == 1:    # halted: carry all state forward unchanged
        dir[t + 1].set_to(dir[t])
        X[t + 1].set_to(X[t])
        Y[t + 1].set_to(Y[t])
        reg[t + 1].set_to(reg[t])
        instr_ptr[t + 1].set_to(instr_ptr[t])
    elif is_halted[t] == 0:  # not halted: execute the current instruction
        with instr_ptr[t] as i:
            if instr[i] == 0:    # INC
                reg[t + 1].set_to(INC(reg[t]))
            elif instr[i] == 1:  # DEC
                reg[t + 1].set_to(DEC(reg[t]))
            else:
                reg[t + 1].set_to(reg[t])
            if instr[i] == 2:    # MOVE
                X[t + 1].set_to(MOVE_X(X[t], dir[t]))
                Y[t + 1].set_to(MOVE_Y(Y[t], dir[t]))
            else:
                X[t + 1].set_to(X[t])
                Y[t + 1].set_to(Y[t])
            if instr[i] == 3:    # LOOK
                with X[t] as x:
                    with Y[t] as y:
                        dir[t + 1].set_to(LOOK(img_grid[y, x]))
            else:
                dir[t + 1].set_to(dir[t])
            instr_ptr[t + 1].set_to(goto[i])

final_X.set_to(X[T - 1])
final_Y.set_to(Y[T - 1])
path_len.set_to(reg[T - 1])
is_halted_at_end.set_to(is_halted[T - 2])

Input-output data set (one example):

img_grid = <grid of street sign images>
init_X = 0; init_Y = 1
final_X = 4; final_Y = 2; path_len = 7

Solution:

instr = [3, 2, 0]
goto = [1, 2, 0]

L0: if not halted: dir = LOOK; halt if dir == 0; goto L1
L1: if not halted: MOVE(dir); goto L2
L2: if not halted: reg = INC(reg); goto L0

LOOK := <trained neural network; see Supplementary Material>

Figure 1: Components of an illustrative NTPT program for learning loopy programs that measure path length (path_len) through a maze of street sign images. The learned program (parameterized by instr and goto) must control the position (X, Y) of an agent on a grid of (W × H) street sign images, each of size (w × h). The agent has a single register of memory (reg) and learns to interpret street signs using the LOOK neural function. Our system produces a solution consisting of a correctly inferred program and a trained neural network (see Supplementary Material). Learnable components are shown in blue and the NTPT extensions to the TERPRET language are highlighted. The red path on the img_grid shows the desired behavior and is not provided at training time.
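To make the execution model of Figure 1 concrete, the following is a minimal, non-differentiable Python sketch that runs the inferred solution (instr = [3, 2, 0], goto = [1, 2, 0]) directly. The function name run_maze and the look argument (a stand-in oracle for the trained LOOK network) are illustrative and not part of the NTPT system.

# Plain-Python sketch of the Figure 1 execution model (not NTPT itself).
# `look` is a hypothetical oracle standing in for the trained LOOK network:
# it maps the street sign image at (y, x) to a value in {0: halt, 1..4: a direction}.
def run_maze(img_grid, init_X, init_Y, look,
             instr=(3, 2, 0), goto=(1, 2, 0),
             max_int=15, W=5, H=3, T=45):
    x, y, dir, reg, ptr = init_X, init_Y, 1, 0, 0
    for _ in range(T - 1):
        if dir == 0:                     # halted: state is frozen for the rest of the run
            continue
        op = instr[ptr]
        if op == 0:                      # INC
            reg = (reg + 1) % max_int
        elif op == 1:                    # DEC
            reg = (reg - 1) % max_int
        elif op == 2:                    # MOVE in the current direction (with wrap-around)
            if dir == 1:   x = (x + 1) % W
            elif dir == 3: x = (x - 1) % W
            elif dir == 2: y = (y - 1) % H
            elif dir == 4: y = (y + 1) % H
        elif op == 3:                    # LOOK: read the sign under the agent
            dir = look(img_grid[y][x])
        ptr = goto[ptr]
    return x, y, reg                     # final_X, final_Y, path_len

On the example data set of Figure 1 (init_X = 0, init_Y = 1), the inferred program loops LOOK -> MOVE -> INC, so reg has accumulated the path length (7) by the time a halt sign is read, and the agent stops at (final_X, final_Y) = (4, 2).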

2. Building hybrid models

The TERPRET language (Gaunt et al., 2016) provides a system for constructing differentiable program interpreters that can induce source code operating on basic data types (e.g., integers) from input-output examples. We extend this language with the concept of learnable neural functions. These can either be embedded inside the differentiable interpreter as mappings from integer to integer or (as we emphasize in this work) can act as learnable interfaces between perceptual data represented as floating-point Tensors and the differentiable interpreter's integer data type. Below we briefly review the TERPRET language and describe the NEURAL TERPRET extensions.
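As a rough illustration of what such a neural library function provides (an assumption-laden sketch, not the NTPT declaration syntax), the LOOK function of Fig. 1 can be thought of as a small network mapping a (w × h) image tensor to a softmax distribution over a bounded integer range, so that its output can be consumed by the differentiable interpreter like any other integer-valued quantity. The layer sizes below are illustrative; in NTPT the parameters of such functions are trained jointly with the program parameters.

import numpy as np

# Illustrative sketch only: a neural "library function" mapping perceptual input
# (a float image tensor) to a distribution over a small bounded integer range.
rng = np.random.default_rng(0)
w, h, n_hidden, n_out = 28, 28, 256, 5   # n_out = 5 direction values used by LOOK
W1 = rng.normal(0.0, 0.01, (w * h, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(0.0, 0.01, (n_hidden, n_out)); b2 = np.zeros(n_out)

def look_net(img):
    """Map a (w, h) image to a softmax distribution over the integers 0..4."""
    z = np.maximum(img.reshape(-1) @ W1 + b1, 0.0)   # ReLU hidden layer
    logits = z @ W2 + b2
    e = np.exp(logits - logits.max())
    return e / e.sum()                               # marginal distribution over 5 values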

2.1. TERPRET

TERPRET programs specify a differentiable interpreter by defining the relationship between Inputs and Outputs via a set of inferrable Params (that define an executable program) and Vars (that store intermediate results). TERPRET requires all of these variables to range over bounded integers. The model is made differentiable by a compilation step that lifts the relationships between integers specified by the TERPRET code to relationships between marginal distributions over integers in finite ranges. Fig. 1 illustrates
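The lifting performed by this compilation step can be sketched in a few lines of plain numpy (a toy illustration of the idea, not the TERPRET compiler): a Param over a bounded integer range is represented by a softmax over learnable logits, and an assignment such as Z.set_to(ADD(X, Y)) becomes a computation of Z's marginal distribution from the marginals of X and Y.

import numpy as np

def softmax(logits):
    e = np.exp(logits - logits.max())
    return e / e.sum()

K = 10                                   # all variables range over {0, ..., K-1}
theta = np.zeros(K)                      # learnable logits of a Param
p_param = softmax(theta)                 # its marginal distribution

def lift_binary(op, p_x, p_y, K):
    """Marginal of Z = op(X, Y) given independent marginals of X and Y."""
    p_z = np.zeros(K)
    for xv in range(K):
        for yv in range(K):
            p_z[op(xv, yv) % K] += p_x[xv] * p_y[yv]
    return p_z

p_x = softmax(np.random.randn(K))
p_y = softmax(np.random.randn(K))
p_z = lift_binary(lambda a, b: a + b, p_x, p_y, K)   # lifted Z.set_to(ADD(X, Y))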