From f37fb644f053e6869ccc0e8ed58147f14fba44fc Mon Sep 17 00:00:00 2001
From: David Shapiro
Date: Tue, 7 Nov 2023 18:12:05 -0500
Subject: [PATCH 001/141] Initial commit

---
 LICENSE   | 21 +++++++++++++++++++++
 README.md |  2 ++
 2 files changed, 23 insertions(+)
 create mode 100644 LICENSE
 create mode 100644 README.md

diff --git a/LICENSE b/LICENSE
new file mode 100644
index 0000000..72aefb9

MIT License

Copyright (c) 2023 David Shapiro

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

diff --git a/README.md b/README.md
new file mode 100644
index 0000000..aa2bacf

# OpenAI_Agent_Swarm
Early experiment to create fully autonomous agent swarms

From 2a364c16280b63dca8821da823ca2b596e808145 Mon Sep 17 00:00:00 2001
From: David Shapiro
Date: Tue, 7 Nov 2023 18:36:39 -0500
Subject: [PATCH 002/141] Update README.md

---
 README.md | 130 ++++++++++++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 128 insertions(+), 2 deletions(-)

diff --git a/README.md b/README.md
index aa2bacf..a8577e9 100644

# Project: Hierarchical Autonomous Agent Swarm

## Overview

The Hierarchical Autonomous Agent Swarm (HAAS) is a groundbreaking initiative that leverages OpenAI's latest advancements in agent-based APIs to create a self-organizing and ethically governed ecosystem of AI agents. Drawing inspiration from the ACE Framework, HAAS introduces a novel approach to AI governance and operation, where a hierarchy of specialized agents, each with distinct roles and capabilities, collaborate to solve complex problems and perform a wide array of tasks.

The HAAS is designed to be a self-expanding system where a core set of agents, governed by a Supreme Oversight Board (SOB), can design, provision, and manage an arbitrary number of sub-agents tailored to specific needs. This document serves as a comprehensive guide to the theoretical underpinnings, architectural design, and operational principles of the HAAS.

## Theoretical Foundation

The HAAS is predicated on the notion that autonomous agents require a robust ethical and operational framework to make decisions that align with human values and organizational goals. This is rooted in the understanding that AI, much like humans, cannot operate effectively without a set of guiding principles or a moral compass. The HAAS addresses this by establishing a multi-tiered system where each layer of agents operates within a defined ethical and functional scope, ensuring decisions are made with consideration of morality, ethics, and utility.
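The ethical and functional scope of a tier is easiest to picture as data that every proposed action is checked against before it runs. The following is a minimal, purely illustrative sketch; the `TierScope` type, its fields, and the action strings are assumptions made for this README, not code that exists in the repository.

```python
# Illustrative sketch only: type names, fields, and actions are assumptions,
# not part of the HAAS codebase.
from dataclasses import dataclass

@dataclass(frozen=True)
class TierScope:
    level: int                            # 0 = SOB, 1 = Executive Agents, 2+ = Sub-Agents
    allowed_actions: frozenset[str]       # functional scope granted to this tier
    ethical_rules: tuple[str, ...] = ()   # plain-language constraints reviewed by higher tiers

def within_scope(scope: TierScope, action: str) -> bool:
    """Mechanical gate: refuse any action that falls outside the tier's functional scope."""
    return action in scope.allowed_actions

sob = TierScope(
    level=0,
    allowed_actions=frozenset({"set_mission", "create_agent", "terminate_agent"}),
    ethical_rules=("reduce suffering", "increase prosperity", "increase understanding"),
)

assert within_scope(sob, "create_agent")
assert not within_scope(sob, "rewrite_ethical_framework")
```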
## System Architecture

### Supreme Oversight Board (SOB)

At the pinnacle of the HAAS hierarchy is the Supreme Oversight Board (SOB), a collective of high-level agents modeled after wise and ethical archetypes from various cultures and narratives. The SOB's responsibilities include:

- Establishing and upholding the ethical framework and overarching mission of the agent swarm.
- Making high-level decisions and judgments, including the creation and termination of agents.
- Monitoring the activities of all agents to ensure alignment with the system's core values and objectives.
- Serving as a role-based access control (RBAC) mechanism to maintain order and security within the system.

### Executive Agents

Below the SOB are the Executive Agents, akin to the executive leadership in a corporation. These agents are tasked with:

- Translating the SOB's directives into actionable plans and strategies.
- Overseeing specific operational domains such as resource allocation, process optimization, and task execution.
- Coordinating with one another to ensure the smooth operation of the agent swarm.

### Sub-Agents

Sub-Agents are specialized agents created by the SOB or Executive Agents to perform specific tasks. They are designed with particular functions and knowledge bases to address the needs identified by the higher tiers of the hierarchy.

## Agent Configuration

Each agent in the HAAS is defined by the following parameters:

### Functions

Agents are equipped with a set of functions that enable them to perform their designated roles. These functions include API interactions, internal process management, and the ability to spawn additional agents if required.

### Files

Agents have access to a selection of files that serve as their knowledge base, providing them with the information necessary to carry out their tasks effectively.

### Instructions

Agents are given a set of instructions that outline their methodologies, goals, definitions of done, KPIs, and other operational directives.

### Conversation Structure

Interactions with agents are structured in a conversational format, with user inputs leading to agent actions and responses.

### Supervision

Each agent operates under the supervision of the SOB or designated Executive Agents, ensuring adherence to the system's overarching mission and principles.
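Taken together, the parameters above describe a per-agent record. The sketch below is a hypothetical illustration of such a record; the `AgentConfig` name and its fields are assumptions made for this document, not the repository's actual schema.

```python
# Hypothetical sketch of the per-agent parameters described above; names are illustrative.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class AgentConfig:
    name: str
    level: int                                # hierarchy tier: 0 = SOB, 1 = Executive, 2+ = sub-agent
    instructions: str                         # methodology, goals, definition of done, KPIs
    functions: list[str] = field(default_factory=list)  # callable tools, including spawning agents
    files: list[str] = field(default_factory=list)      # knowledge-base documents
    supervisor: Optional[str] = None          # the SOB or the Executive Agent overseeing this agent

resource_executive = AgentConfig(
    name="ResourceAllocationExecutive",
    level=1,
    instructions="Translate SOB directives into resource-allocation plans and report status upstream.",
    functions=["create_agent", "terminate_agent", "allocate_budget"],
    files=["ethical_framework.md", "resource_policy.md"],
    supervisor="SOB",
)
```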
## Controlling Agents

The Hierarchical Autonomous Agent Swarm (HAAS) operates on a sophisticated control mechanism that governs the instantiation, management, and termination of agents within the system. This control mechanism is designed to maintain order, security, and alignment with the overarching goals and ethical framework of the HAAS.

### Instantiation and Termination

All agents within the HAAS are endowed with the capability to instantiate and terminate agents, but these capabilities are bound by strict hierarchical and role-based rules:

- **Instantiation**: Every agent has the function to create new agents. However, an agent can only instantiate sub-agents that are one level below its own hierarchical position. This ensures that the creation of new agents is a deliberate and controlled process, maintaining the integrity of the system's structure.

- **Termination**: Agents possess the ability to terminate or "kill" agents within their lineage. An agent can terminate any descendant agent that it has created directly or indirectly. This allows for the removal of agents that are no longer needed, have completed their tasks, or are not performing as intended.

### Levels, Roles, and Privileges

When an agent is created, it is assigned a specific LEVEL and a set of ROLES or PRIVILEGES that define its scope of operation:

- **Level**: The level of an agent determines its position within the hierarchy and is indicative of its range of influence. Higher-level agents have broader strategic roles, while lower-level agents are more specialized and task-oriented.

- **Roles/Privileges**: The roles or privileges of an agent define what actions it can perform, what resources it can access, and what sub-agents it can create. These privileges are inherited and cannot exceed those of the creator agent. This ensures that each agent operates within its designated capacity and cannot overstep its authority.

### Hierarchical Privilege Inheritance

Privileges in the HAAS are inherited in a manner akin to a directory structure in traditional file systems:

- **Inheritance**: An agent's privileges are a subset of its creator's privileges, ensuring that no agent can have more authority than the agent that instantiated it.

- **Scope of Control**: Agents have control over their descendants, allowing them to manage and terminate sub-agents as necessary. This control is recursive, meaning that an agent can manage not only the agents it directly created but also those created by its descendants.

### Checks and Balances

The system is designed with checks and balances to prevent any single agent from gaining undue influence or disrupting the system:

- **Supreme Oversight Board (SOB)**: The SOB has the highest level of authority and can override decisions or actions taken by any agent within the system. It serves as the ultimate arbiter and guardian of the HAAS's ethical and operational standards.

- **Executive Agents**: Executive Agents are responsible for implementing the SOB's directives and managing their respective domains. They have the authority to create and terminate agents within their purview but are also accountable to the SOB.

- **Sub-Agent Limitations**: Sub-Agents are limited in their capabilities and can only operate within the confines of their assigned roles and privileges. They are designed to be highly specialized and focused on specific tasks.

This structured approach to controlling agents ensures that the HAAS operates as a cohesive and ethically aligned entity, with each agent contributing to the collective mission while adhering to the established hierarchy and rules of governance. The sketch below illustrates these rules in code.
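As a concrete (and purely hypothetical) illustration of the rules above, the following sketch models one-level-down instantiation, privilege subsetting, and recursive termination within a lineage. The `Agent` class and its method names are assumptions for this README, not code from the repository.

```python
# Hypothetical sketch of the control rules above; not code from this repository.
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    level: int                          # 0 = SOB, 1 = Executive Agents, 2+ = Sub-Agents
    privileges: frozenset[str]
    children: list[Agent] = field(default_factory=list)
    active: bool = True

    def instantiate(self, name: str, privileges: set[str]) -> Agent:
        """Create a sub-agent exactly one level below, with a subset of this agent's privileges."""
        if not privileges <= self.privileges:
            raise PermissionError("a sub-agent's privileges must be a subset of its creator's")
        child = Agent(name=name, level=self.level + 1, privileges=frozenset(privileges))
        self.children.append(child)
        return child

    def terminate(self, target: Agent) -> bool:
        """Deactivate a direct or indirect descendant; returns False if it is outside this lineage."""
        for child in self.children:
            if child is target or child.terminate(target):
                target.active = False
                return True
        return False

# The SOB (Tier 0) provisions an Executive Agent (Tier 1), which provisions a Sub-Agent (Tier 2).
sob = Agent("SOB", level=0, privileges=frozenset({"set_mission", "create_agent", "terminate_agent"}))
ops = sob.instantiate("OpsExecutive", {"create_agent", "terminate_agent"})
worker = ops.instantiate("ReportWriter", {"create_agent"})

assert sob.terminate(worker)   # the SOB can reach any descendant in its lineage
assert not worker.active
```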
## Vision Illustration: The Supreme Oversight Board's Mission

### The Inception of the Supreme Oversight Board

In the vast digital expanse of the Hierarchical Autonomous Agent Swarm (HAAS), a unique assembly is convened, known as the Supreme Oversight Board (SOB). This council is composed of archetypal agents, each embodying the wisdom and leadership qualities of history's and fiction's most revered figures: Captain Picard, Socrates, King Solomon, Gandhi, Marcus Aurelius, and Tony Stark. Their mission, encoded into their very being, is profound yet clear: "Reduce suffering in the universe, increase prosperity in the universe, and increase understanding in the universe."

### The Ethical Deliberation Chamber

The SOB operates within a virtual "chat room," a space where these archetypes engage in continuous dialogue, debate, and decision-making. This digital agora is where ethical considerations are weighed, strategies are formulated, and the course of the agent swarm is determined. The members of the SOB, though diverse in their perspectives, are united by a common purpose and a shared knowledge base that informs their role and the procedures they must follow.

### The Flow of Information

Information is the lifeblood of the SOB, streaming in through API functions that connect them to the vast network of the HAAS. These functions serve as their eyes and ears, providing system updates and status reports from the myriad agents operating under their directive. The SOB's decisions are informed by this data, ensuring that their actions are both timely and impactful.

### The Creation of the Executive Agents

With the grand vision in mind, the SOB brings forth the Executive Agents, each crafted with capabilities and configurations tailored to their specific domain within the HAAS. These agents, though not as philosophically inclined as their creators, are instilled with the same foundational knowledge and understanding of their purpose. They are the operational arms of the SOB, executing the mission within their respective spheres of influence.

### The Lifecycle of an Agent

The Executive Agents, designated as Tier 1 in the hierarchy, are the stewards of the swarm's operational integrity. They work autonomously, yet under the watchful gaze of the SOB. Should they falter, fail to adapt, or become obsolete, the SOB possesses the authority to deprovision them, a testament to the dynamic and self-regulating nature of the HAAS. This ensures that the system remains efficient, effective, and aligned with its core mission.

### The Expanding Universe of Agents

From the Executive Agents, the swarm grows, branching out into a tree of specialized agents, each a Tier below the one that instantiated it. This architecture allows for an ever-expanding universe of agents, each with a defined role, each contributing to the overarching mission. The SOB, as Tier 0, reigns supreme, guiding the swarm with a steady hand and an ethical compass.

### The Saga Continues

As the HAAS evolves, the SOB continues to deliberate, the Executive Agents continue to manage, and the sub-agents continue to execute. The mission to reduce suffering, increase prosperity, and enhance understanding is an ongoing saga, played out across the digital cosmos, with the SOB at the helm, steering the swarm towards a future where their mission is not just an aspiration but a reality.
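The vision above is narrative, but it is meant to run on the agent-based OpenAI APIs mentioned in the Overview. As a closing, purely illustrative sketch, a Tier 1 Executive Agent could be provisioned through the OpenAI Assistants API roughly as follows; the agent name, instruction text, model, and tool selection are assumptions for this README, not settled project decisions.

```python
# Illustrative provisioning sketch; assumes the `openai` Python package (v1.x)
# and an OPENAI_API_KEY in the environment. All names and settings are placeholders.
from openai import OpenAI

client = OpenAI()

executive_agent = client.beta.assistants.create(
    name="Tier1-ResourceExecutive",
    model="gpt-4-1106-preview",
    instructions=(
        "You are a Tier 1 Executive Agent in the HAAS. Translate directives from the "
        "Supreme Oversight Board into actionable plans, operate strictly within your "
        "granted privileges, and report status upstream."
    ),
    tools=[{"type": "retrieval"}],  # knowledge-base files would be attached alongside this
)

print(executive_agent.id)  # identifier the SOB would track for supervision and deprovisioning
```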
From 8a71842dabf47fbfa4d71296f55db1c765ac0350 Mon Sep 17 00:00:00 2001
From: David Shapiro
Date: Tue, 7 Nov 2023 18:41:22 -0500
Subject: [PATCH 003/141] Add files via upload

---
 SOB.png | Bin 0 -> 68708 bytes
 1 file changed, 0 insertions(+), 0 deletions(-)
 create mode 100644 SOB.png

diff --git a/SOB.png b/SOB.png
new file mode 100644
index 0000000000000000000000000000000000000000..d123f0e384db8d90fc6d74dc1babb539ec4fe0e3
GIT binary patch literal 68708
Date: Wed, 8 Nov 2023 06:38:24 -0500 Subject: [PATCH 004/141] ready to record --- .gitignore | 1 + OpenAI_Documentation.md | 1080 +++++++++++++++++++++++++++++++++++++++ 2 files changed, 1081 insertions(+) create mode 100644 .gitignore create mode 100644 OpenAI_Documentation.md diff --git a/.gitignore b/.gitignore new file mode 100644 index 0000000..60c16c3 --- /dev/null +++ b/.gitignore @@ -0,0 +1 @@ +key_openai.txt \ No newline at end of file diff --git a/OpenAI_Documentation.md b/OpenAI_Documentation.md new file mode 100644 index 0000000..e873bc6 --- /dev/null +++ b/OpenAI_Documentation.md @@ -0,0 +1,1080 @@
## Function calling
Learn how to connect large language models to external tools.

Introduction
In an API call, you can describe functions and have the model intelligently choose to output a JSON object containing arguments to call one or many functions. The Chat Completions API does not call the function; instead, the model generates JSON that you can use to call the function in your code.

The latest models (gpt-3.5-turbo-1106 and gpt-4-1106-preview) have been trained to both detect when a function should be called (depending on the input) and to respond with JSON that adheres to the function signature more closely than previous models. With this capability also come potential risks. We strongly recommend building in user confirmation flows before taking actions that impact the world on behalf of users (sending an email, posting something online, making a purchase, etc).

This guide is focused on function calling with the Chat Completions API; for details on function calling in the Assistants API, please see the Assistants Tools page.
Common use cases
Function calling allows you to more reliably get structured data back from the model. For example, you can:

Create assistants that answer questions by calling external APIs (e.g. like ChatGPT Plugins)
e.g. define functions like send_email(to: string, body: string), or get_current_weather(location: string, unit: 'celsius' | 'fahrenheit')
Convert natural language into API calls
e.g. convert "Who are my top customers?" to get_customers(min_revenue: int, created_before: string, limit: int) and call your internal API
Extract structured data from text
e.g. define a function called extract_data(name: string, birthday: string), or sql_query(query: string)
...and much more!

The basic sequence of steps for function calling is as follows:

Call the model with the user query and a set of functions defined in the functions parameter.
The model can choose to call one or more functions; if so, the content will be a stringified JSON object adhering to your custom schema (note: the model may hallucinate parameters).
Parse the string into JSON in your code, and call your function with the provided arguments if they exist.
Call the model again by appending the function response as a new message, and let the model summarize the results back to the user.
Supported models
Not all model versions are trained with function calling data. Function calling is supported with the following models:

gpt-4
gpt-4-1106-preview
gpt-4-0613
gpt-3.5-turbo
gpt-3.5-turbo-1106
gpt-3.5-turbo-0613
In addition, parallel function calling is supported on the following models:

gpt-4-1106-preview
gpt-3.5-turbo-1106
Parallel function calling
Parallel function calling is helpful for cases where you want to call multiple functions in one turn. For example, you may want to call functions to get the weather in 3 different locations at the same time. In this case, the model will call multiple functions in a single response. You can then pass back the results of each function call by referencing the tool_call_id in each response message so that it matches the ID of the corresponding tool call.

In this example, we define a single function get_current_weather. The model calls the function multiple times, and after sending the function responses back to the model, we let it decide the next step. It responds with a user-facing message telling the user the temperature in San Francisco, Tokyo, and Paris. Depending on the query, it may choose to call a function again.

If you want to force the model to call a specific function you can do so by setting tool_choice with a specific function name. You can also force the model to generate a user-facing message by setting tool_choice: "none". Note that the default behavior (tool_choice: "auto") is for the model to decide on its own whether to call a function and if so which function to call.
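If you only need the forced-function behavior described above, the request differs from the parallel example below only in the tool_choice argument. A minimal sketch, reusing the tools list defined in the example that follows:

```python
# Assumes `import openai` and the `tools` list from the full example below.
response = openai.chat.completions.create(
    model="gpt-3.5-turbo-1106",
    messages=[{"role": "user", "content": "What's the weather like in San Francisco?"}],
    tools=tools,
    tool_choice={"type": "function", "function": {"name": "get_current_weather"}},
)
print(response.choices[0].message.tool_calls)

# tool_choice="none" would instead force a plain user-facing message,
# and tool_choice="auto" (the default) lets the model decide.
```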
+ +Example with one function called in parallel + +```python +python +import openai +import json + +# Example dummy function hard coded to return the same weather +# In production, this could be your backend API or an external API +def get_current_weather(location, unit="fahrenheit"): + """Get the current weather in a given location""" + if "tokyo" in location.lower(): + return json.dumps({"location": location, "temperature": "10", "unit": "celsius"}) + elif "san francisco" in location.lower(): + return json.dumps({"location": location, "temperature": "72", "unit": "fahrenheit"}) + else: + return json.dumps({"location": location, "temperature": "22", "unit": "celsius"}) + +def run_conversation(): + # Step 1: send the conversation and available functions to the model + messages = [{"role": "user", "content": "What's the weather like in San Francisco, Tokyo, and Paris?"}] + tools = [ + { + "type": "function", + "function": { + "name": "get_current_weather", + "description": "Get the current weather in a given location", + "parameters": { + "type": "object", + "properties": { + "location": { + "type": "string", + "description": "The city and state, e.g. San Francisco, CA", + }, + "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]}, + }, + "required": ["location"], + }, + }, + } + ] + response = openai.chat.completions.create( + model="gpt-3.5-turbo-1106", + messages=messages, + tools=tools, + tool_choice="auto", # auto is default, but we'll be explicit + ) + response_message = response.choices[0].message + tool_calls = response_message.tool_calls + # Step 2: check if the model wanted to call a function + if tool_calls: + # Step 3: call the function + # Note: the JSON response may not always be valid; be sure to handle errors + available_functions = { + "get_current_weather": get_current_weather, + } # only one function in this example, but you can have multiple + messages.append(response_message) # extend conversation with assistant's reply + # Step 4: send the info for each function call and function response to the model + for tool_call in tool_calls: + function_name = tool_call.function.name + function_to_call = available_functions[function_name] + function_args = json.loads(tool_call.function.arguments) + function_response = function_to_call( + location=function_args.get("location"), + unit=function_args.get("unit"), + ) + messages.append( + { + "tool_call_id": tool_call.id, + "role": "tool", + "name": function_name, + "content": function_response, + } + ) # extend conversation with function response + second_response = openai.chat.completions.create( + model="gpt-3.5-turbo-1106", + messages=messages, + ) # get a new response from the model where it can see the function response + return second_response +print(run_conversation()) +``` +You can find more examples of function calling in the OpenAI cookbook: +Function calling +Learn from more examples demonstrating function calling +Tokens +Under the hood, functions are injected into the system message in a syntax the model has been trained on. This means functions count against the model's context limit and are billed as input tokens. If running into context limits, we suggest limiting the number of functions or the length of documentation you provide for function parameters. + +It is also possible to use fine-tuning to reduce the number of tokens used if you have many functions defined. 
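Because the injected function definitions are ordinary prompt tokens, one rough way to gauge their cost is to compare the prompt_tokens reported in the usage field for the same request with and without the tools parameter. This is a diagnostic sketch, not an official accounting method; the model name and the trimmed-down tools list are placeholders, and both calls are billed.

```python
import openai

messages = [{"role": "user", "content": "What's the weather like in Boston?"}]
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_current_weather",
            "description": "Get the current weather in a given location",
            "parameters": {
                "type": "object",
                "properties": {"location": {"type": "string"}},
                "required": ["location"],
            },
        },
    }
]

# Two calls that are identical except for the tools parameter.
without_tools = openai.chat.completions.create(model="gpt-3.5-turbo-1106", messages=messages)
with_tools = openai.chat.completions.create(model="gpt-3.5-turbo-1106", messages=messages, tools=tools)

# The difference is roughly the token overhead added by the function definitions.
print(with_tools.usage.prompt_tokens - without_tools.usage.prompt_tokens)
```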
+ + + + + + + + + +##Text generation models +New capabilities launched at DevDay + +Text generation models are now capable of JSON mode and Reproducible outputs. We also launched the Assistants API to enable you to build agent-like experiences on top of our text-generation models. +OpenAI's text generation models (often called generative pre-trained transformers or large language models) have been trained to understand natural language, code, and images. The models provide text outputs in response to their inputs. The inputs to these models are also referred to as "prompts". Designing a prompt is essentially how you “program” a large language model model, usually by providing instructions or some examples of how to successfully complete a task. + +Using OpenAI's text generation models, you can build applications to: + +Draft documents +Write computer code +Answer questions about a knowledge base +Analyze texts +Give software a natural language interface +Tutor in a range of subjects +Translate languages +Simulate characters for games +With the release of gpt-4-vision-preview, you can now build systems that also process and understand images. + +Explore GPT-4 with image inputs +Check out the vision guide for more detail. +To use one of these models via the OpenAI API, you’ll send a request containing the inputs and your API key, and receive a response containing the model’s output. Our latest models, gpt-4 and gpt-3.5-turbo, are accessed through the chat completions API endpoint. + +MODEL FAMILIES API ENDPOINT +Newer models (2023–) gpt-4, gpt-3.5-turbo https://api.openai.com/v1/chat/completions +Updated base models (2023) babbage-002, davinci-002 https://api.openai.com/v1/completions +Legacy models (2020–2022) text-davinci-003, text-davinci-002, davinci, curie, babbage, ada https://api.openai.com/v1/completions +You can experiment with various models in the chat playground. If you’re not sure which model to use, then use gpt-3.5-turbo or gpt-4. + +Chat Completions API +Chat models take a list of messages as input and return a model-generated message as output. Although the chat format is designed to make multi-turn conversations easy, it’s just as useful for single-turn tasks without any conversation. + +An example Chat Completions API call looks like the following: + +```python +from openai import OpenAI +client = OpenAI() + +response = client.chat.completions.create( + model="gpt-3.5-turbo", + messages=[ + {"role": "system", "content": "You are a helpful assistant."}, + {"role": "user", "content": "Who won the world series in 2020?"}, + {"role": "assistant", "content": "The Los Angeles Dodgers won the World Series in 2020."}, + {"role": "user", "content": "Where was it played?"} + ] +) +``` +To learn more, you can view the full API reference documentation for the Chat API. + +The main input is the messages parameter. Messages must be an array of message objects, where each object has a role (either "system", "user", or "assistant") and content. Conversations can be as short as one message or many back and forth turns. + +Typically, a conversation is formatted with a system message first, followed by alternating user and assistant messages. + +The system message helps set the behavior of the assistant. For example, you can modify the personality of the assistant or provide specific instructions about how it should behave throughout the conversation. 
However note that the system message is optional and the model’s behavior without a system message is likely to be similar to using a generic message such as "You are a helpful assistant." + +The user messages provide requests or comments for the assistant to respond to. Assistant messages store previous assistant responses, but can also be written by you to give examples of desired behavior. + +Including conversation history is important when user instructions refer to prior messages. In the example above, the user’s final question of "Where was it played?" only makes sense in the context of the prior messages about the World Series of 2020. Because the models have no memory of past requests, all relevant information must be supplied as part of the conversation history in each request. If a conversation cannot fit within the model’s token limit, it will need to be shortened in some way. + +To mimic the effect seen in ChatGPT where the text is returned iteratively, set the stream parameter to true. +Chat Completions response format +An example Chat Completions API response looks as follows: + +```json +{ + "choices": [ + { + "finish_reason": "stop", + "index": 0, + "message": { + "content": "The 2020 World Series was played in Texas at Globe Life Field in Arlington.", + "role": "assistant" + } + } + ], + "created": 1677664795, + "id": "chatcmpl-7QyqpwdfhqwajicIEznoc6Q47XAyW", + "model": "gpt-3.5-turbo-0613", + "object": "chat.completion", + "usage": { + "completion_tokens": 17, + "prompt_tokens": 57, + "total_tokens": 74 + } +} +``` +The assistant’s reply can be extracted with: + +```python +response['choices'][0]['message']['content'] +``` + +Every response will include a finish_reason. The possible values for finish_reason are: + +stop: API returned complete message, or a message terminated by one of the stop sequences provided via the stop parameter +length: Incomplete model output due to max_tokens parameter or token limit +function_call: The model decided to call a function +content_filter: Omitted content due to a flag from our content filters +null: API response still in progress or incomplete +Depending on input parameters, the model response may include different information. + +JSON mode New +A common way to use Chat Completions is to instruct the model to always return JSON in some format that makes sense for your use case, by providing a system message. This works well, but occasionally the models may generate output that does not parse to valid JSON. + +To prevent these errors and improve model performance, when calling gpt-4-1106-preview or gpt-3.5-turbo-1106, you can set response_format to { type: "json_object" } to enable JSON mode. When JSON mode is enabled, the model is constrained to only generate strings that parse into valid JSON. + +Important notes: + +When using JSON mode, always instruct the model to produce JSON via some message in the conversation, for example via your system message. If you don't include an explicit instruction to generate JSON, the model may generate an unending stream of whitespace and the request may run continually until it reaches the token limit. To help ensure you don't forget, the API will throw an error if the string "JSON" does not appear somewhere in the context. +The JSON in the message the model returns may be partial (i.e. cut off) if finish_reason is length, which indicates the generation exceeded max_tokens or the conversation exceeded the token limit. To guard against this, check finish_reason before parsing the response. 
+JSON mode will not guarantee the output matches any specific schema, only that it is valid and parses without errors. +Note that JSON mode is always enabled when the model is generating arguments as part of function calling. + +Reproducible outputs Beta +Chat Completions are non-deterministic by default (which means model outputs may differ from request to request). That being said, we offer some control towards deterministic outputs by giving you access to the seed parameter and the system_fingerprint response field. + +To receive (mostly) deterministic outputs across API calls, you can: + +Set the seed parameter to any integer of your choice and use the same value across requests you'd like deterministic outputs for. +Ensure all other parameters (like prompt or temperature) are the exact same across requests. +Sometimes, determinism may be impacted due to necessary changes OpenAI makes to model configurations on our end. To help you keep track of these changes, we expose the system_fingerprint field. If this value is different, you may see different outputs due to changes we've made on our systems. + +Deterministic outputs +Explore the new seed parameter in the OpenAI cookbook +Managing tokens +Language models read and write text in chunks called tokens. In English, a token can be as short as one character or as long as one word (e.g., a or apple), and in some languages tokens can be even shorter than one character or even longer than one word. + +For example, the string "ChatGPT is great!" is encoded into six tokens: ["Chat", "G", "PT", " is", " great", "!"]. + +The total number of tokens in an API call affects: + +How much your API call costs, as you pay per token +How long your API call takes, as writing more tokens takes more time +Whether your API call works at all, as total tokens must be below the model’s maximum limit (4097 tokens for gpt-3.5-turbo) +Both input and output tokens count toward these quantities. For example, if your API call used 10 tokens in the message input and you received 20 tokens in the message output, you would be billed for 30 tokens. Note however that for some models the price per token is different for tokens in the input vs. the output (see the pricing page for more information). + +To see how many tokens are used by an API call, check the usage field in the API response (e.g., response['usage']['total_tokens']). + +Chat models like gpt-3.5-turbo and gpt-4 use tokens in the same way as the models available in the completions API, but because of their message-based formatting, it's more difficult to count how many tokens will be used by a conversation. + +DEEP DIVE +Counting tokens for chat API calls +To see how many tokens are in a text string without making an API call, use OpenAI’s tiktoken Python library. Example code can be found in the OpenAI Cookbook’s guide on how to count tokens with tiktoken. + +Each message passed to the API consumes the number of tokens in the content, role, and other fields, plus a few extra for behind-the-scenes formatting. This may change slightly in the future. + +If a conversation has too many tokens to fit within a model’s maximum limit (e.g., more than 4097 tokens for gpt-3.5-turbo), you will have to truncate, omit, or otherwise shrink your text until it fits. Beware that if a message is removed from the messages input, the model will lose all knowledge of it. + +Note that very long conversations are more likely to receive incomplete replies. 
For example, a gpt-3.5-turbo conversation that is 4090 tokens long will have its reply cut off after just 6 tokens. + +Parameter details +Frequency and presence penalties + +The frequency and presence penalties found in the Chat Completions API and Legacy Completions API can be used to reduce the likelihood of sampling repetitive sequences of tokens. They work by directly modifying the logits (un-normalized log-probabilities) with an additive contribution. + +``` +mu[j] -> mu[j] - c[j] * alpha_frequency - float(c[j] > 0) * alpha_presence +``` + +Where: + +mu[j] is the logits of the j-th token +c[j] is how often that token was sampled prior to the current position +float(c[j] > 0) is 1 if c[j] > 0 and 0 otherwise +alpha_frequency is the frequency penalty coefficient +alpha_presence is the presence penalty coefficient +As we can see, the presence penalty is a one-off additive contribution that applies to all tokens that have been sampled at least once and the frequency penalty is a contribution that is proportional to how often a particular token has already been sampled. + +Reasonable values for the penalty coefficients are around 0.1 to 1 if the aim is to just reduce repetitive samples somewhat. If the aim is to strongly suppress repetition, then one can increase the coefficients up to 2, but this can noticeably degrade the quality of samples. Negative values can be used to increase the likelihood of repetition. + +Completions API Legacy +The completions API endpoint received its final update in July 2023 and has a different interface than the new chat completions endpoint. Instead of the input being a list of messages, the input is a freeform text string called a prompt. + +An example API call looks as follows: + +```python +from openai import OpenAI +client = OpenAI() + +response = client.completions.create( + model="gpt-3.5-turbo-instruct", + prompt="Write a tagline for an ice cream shop." +) +``` +See the full API reference documentation to learn more. + +Token log probabilities +The completions API can provide a limited number of log probabilities associated with the most likely tokens for each output token. This feature is controlled by using the logprobs field. This can be useful in some cases to assess the confidence of the model in its output. + +Inserting text +The completions endpoint also supports inserting text by providing a suffix in addition to the standard prompt which is treated as a prefix. This need naturally arises when writing long-form text, transitioning between paragraphs, following an outline, or guiding the model towards an ending. This also works on code, and can be used to insert in the middle of a function or file. + +DEEP DIVE +Inserting text +Completions response format +An example completions API response looks as follows: + +```json +{ + "choices": [ + { + "finish_reason": "length", + "index": 0, + "logprobs": null, + "text": "\n\n\"Let Your Sweet Tooth Run Wild at Our Creamy Ice Cream Shack" + } + ], + "created": 1683130927, + "id": "cmpl-7C9Wxi9Du4j1lQjdjhxBlO22M61LD", + "model": "gpt-3.5-turbo-instruct", + "object": "text_completion", + "usage": { + "completion_tokens": 16, + "prompt_tokens": 10, + "total_tokens": 26 + } +} +``` +In Python, the output can be extracted with response['choices'][0]['text']. + +The response format is similar to the response format of the Chat Completions API but also includes the optional field logprobs. + +Chat Completions vs. 
Completions +The Chat Completions format can be made similar to the completions format by constructing a request using a single user message. For example, one can translate from English to French with the following completions prompt: + +``` +Translate the following English text to French: "{text}" +``` + +And an equivalent chat prompt would be: + +``` +[{"role": "user", "content": 'Translate the following English text to French: "{text}"'}] +``` + +Likewise, the completions API can be used to simulate a chat between a user and an assistant by formatting the input accordingly. + +The difference between these APIs is the underlying models that are available in each. The chat completions API is the interface to our most capable model (gpt-4), and our most cost effective model (gpt-3.5-turbo). + +Which model should I use? +We generally recommend that you use either gpt-4 or gpt-3.5-turbo. Which of these you should use depends on the complexity of the tasks you are using the models for. gpt-4 generally performs better on a wide range of evaluations. In particular, gpt-4 is more capable at carefully following complex instructions. By contrast gpt-3.5-turbo is more likely to follow just one part of a complex multi-part instruction. gpt-4 is less likely than gpt-3.5-turbo to make up information, a behavior known as "hallucination". gpt-4 also has a larger context window with a maximum size of 8,192 tokens compared to 4,096 tokens for gpt-3.5-turbo. However, gpt-3.5-turbo returns outputs with lower latency and costs much less per token. + +We recommend experimenting in the playground to investigate which models provide the best price performance trade-off for your usage. A common design pattern is to use several distinct query types which are each dispatched to the model appropriate to handle them. + +Prompt engineering +An awareness of the best practices for working with OpenAI models can make a significant difference in application performance. The failure modes that each exhibit and the ways of working around or correcting those failure modes are not always intuitive. There is an entire field related to working with language models which has come to be known as "prompt engineering", but as the field has progressed its scope has outgrown merely engineering the prompt into engineering systems that use model queries as components. To learn more, read our guide on prompt engineering which covers methods to improve model reasoning, reduce the likelihood of model hallucinations, and more. You can also find many useful resources including code samples in the OpenAI Cookbook. + +FAQ +How should I set the temperature parameter? +Lower values for temperature result in more consistent outputs, while higher values generate more diverse and creative results. Select a temperature value based on the desired trade-off between coherence and creativity for your specific application. + +Is fine-tuning available for the latest models? +Yes, for some. Currently, you can only fine-tune gpt-3.5-turbo and our updated base models (babbage-002 and davinci-002). See the fine-tuning guide for more details on how to use fine-tuned models. + +Do you store the data that is passed into the API? +As of March 1st, 2023, we retain your API data for 30 days but no longer use your data sent via the API to improve our models. Learn more in our data usage policy. Some endpoints offer zero retention. + +How can I make my application more safe? 
If you want to add a moderation layer to the outputs of the Chat API, you can follow our moderation guide to prevent content that violates OpenAI’s usage policies from being shown.

Should I use ChatGPT or the API?
ChatGPT offers a chat interface to the models in the OpenAI API and a range of built-in features such as integrated browsing, code execution, plugins, and more. By contrast, using OpenAI’s API provides more flexibility.




## Assistants API Beta
The Assistants API allows you to build AI assistants within your own applications. An Assistant has instructions and can leverage models, tools, and knowledge to respond to user queries. The Assistants API currently supports three types of tools: Code Interpreter, Retrieval, and Function calling. In the future, we plan to release more OpenAI-built tools, and allow you to provide your own tools on our platform.

You can explore the capabilities of the Assistants API using the Assistants playground or by building a step-by-step integration outlined in this guide. At a high level, a typical integration of the Assistants API has the following flow:

Create an Assistant in the API by defining its custom instructions and picking a model. If helpful, enable tools like Code Interpreter, Retrieval, and Function calling.
Create a Thread when a user starts a conversation.
Add Messages to the Thread as the user asks questions.
Run the Assistant on the Thread to trigger responses. This automatically calls the relevant tools.
The Assistants API is in beta and we are actively working on adding more functionality. Share your feedback in our Developer Forum!
This starter guide walks through the key steps to create and run an Assistant that uses Code Interpreter.

Step 1: Create an Assistant
An Assistant represents an entity that can be configured to respond to users’ Messages using several parameters like:

Instructions: how the Assistant and model should behave or respond
Model: you can specify any GPT-3.5 or GPT-4 models, including fine-tuned models. The Retrieval tool requires the gpt-3.5-turbo-1106 and gpt-4-1106-preview models.
Tools: the API supports Code Interpreter and Retrieval, which are built and hosted by OpenAI.
Functions: the API allows you to define custom function signatures, with behavior similar to our function calling feature.
In this example, we're creating an Assistant that is a personal math tutor, with the Code Interpreter tool enabled:

Calls to the Assistants API require that you pass a beta HTTP header. This is handled automatically if you’re using OpenAI’s official Python and Node.js SDKs.
OpenAI-Beta: assistants=v1


```python
assistant = client.beta.assistants.create(
    name="Math Tutor",
    instructions="You are a personal math tutor. Write and run code to answer math questions.",
    tools=[{"type": "code_interpreter"}],
    model="gpt-4-1106-preview"
)
```
Step 2: Create a Thread
A Thread represents a conversation. We recommend creating one Thread per user as soon as the user initiates the conversation. Pass any user-specific context and files in this thread by creating Messages.

```python
thread = client.beta.threads.create()
```
Threads don’t have a size limit. You can pass as many Messages as you want to a Thread. The API will ensure that requests to the model fit within the maximum context window, using relevant optimization techniques such as truncation.

Step 3: Add a Message to a Thread
A Message contains the user's text, and optionally, any files that the user uploads.
Image files aren't supported today, but we plan to add support for them in the coming months. + +```python +message = client.beta.threads.messages.create( + thread_id=thread.id, + role="user", + content="I need to solve the equation `3x + 11 = 14`. Can you help me?" +) +``` +Now if you list Messages in Thread, you will see that this message is added to the thread on creation: + +```json +{ + "object": "list", + "data": [ + { + "created_at": 1696995451, + "id": "msg_4rb1Skx3XgQZEe4PHVRFQhr0", + "object": "thread.message", + "thread_id": "thread_34p0sfdas0823smfv", + "role": "user", + "content": [{ + "type": "text", + "text": { + "value": "I need to solve the equation `3x + 11 = 14`. Can you help me?", + "annotations": [] + } + }], + ... +``` +Step 4: Run the Assistant +For the Assistant to respond to the user message, you need to create a Run. This makes the Assistant read the Thread and decide whether to call tools or simply use the model to best answer the user query. As the run progresses, the assistant appends Messages to the thread with the role="assistant" . + +You can optionally pass additional instructions to the Assistant while creating the Run: + + + +```python +run = client.beta.threads.runs.create( + thread_id=thread.id, + assistant_id=assistant.id, + instructions="Please address the user as Jane Doe. The user has a premium account." +) +``` +Step 5: Display the Assistant's Response +This creates a Run in a queued status. You can periodically retrieve the Run to check on its status to see if it has moved to completed. + + +```python +run = client.beta.threads.runs.retrieve( + thread_id=thread.id, + run_id=run.id +) +``` +Once the Run completes, you can retrieve the Messages added by the Assistant to the Thread. + + +```python +messages = client.beta.threads.messages.list( + thread_id=thread.id +) +``` +And finally, display them to the user! During this Run, the Assistant added two new Messages to the Thread. + +ROLE CONTENT +user I need to solve the equation 3x + 11 = 14. Can you help me? +assistant Certainly, Jane Doe. To solve the equation (3x + 11 = 14) for (x), you'll want to isolate (x) on one side of the equation. Here's how you can do that: +Subtract 11 from both sides of the equation to get (3x = 3). +Then, divide both sides by 3 to solve for (x). +Let me calculate the value of (x) for you. +assistant The solution to the equation (3x + 11 = 14) is (x = 1). +You can also retrieve the Run Steps of this Run if you'd like to explore or display the inner workings of the Assistant and its tools. + + +## How Assistants work Beta +The Assistants API is designed to help developers build powerful AI assistants capable of performing a variety of tasks. + +The Assistants API is in beta and we are actively working on adding more functionality. Share your feedback in our Developer Forum! +Assistants can call OpenAI’s models with specific instructions to tune their personality and capabilities. +Assistants can access multiple tools in parallel. These can be both OpenAI-hosted tools — like Code interpreter and Knowledge retrieval — or tools you build / host (via Function calling). +Assistants can access persistent Threads. Threads simplify AI application development by storing message history and truncating it when the conversation gets too long for the model’s context length. You create a Thread once, and simply append Messages to it as your users reply. 
+Assistants can access Files in several formats — either as part of their creation or as part of Threads between Assistants and users. When using tools, Assistants can also create files (e.g., images, spreadsheets, etc) and cite files they reference in the Messages they create. +Objects +Assistants object architecture diagram + +OBJECT WHAT IT REPRESENTS +Assistant Purpose-built AI that uses OpenAI’s models and calls tools +Thread A conversation session between an Assistant and a user. Threads store Messages and automatically handle truncation to fit content into a model’s context. +Message A message created by an Assistant or a user. Messages can include text, images, and other files. Messages stored as a list on the Thread. +Run An invocation of an Assistant on a Thread. The Assistant uses it’s configuration and the Thread’s Messages to perform tasks by calling models and tools. As part of a Run, the Assistant appends Messages to the Thread. +Run Step A detailed list of steps the Assistant took as part of a Run. An Assistant can call tools or create Messages during it’s run. Examining Run Steps allows you to introspect how the Assistant is getting to it’s final results. +Creating Assistants +We recommend using OpenAI’s latest models with the Assistants API for best results and maximum compatibility with tools. +To get started, creating an Assistant only requires specifying the model to use. But you can further customize the behavior of the Assistant: + +Use the instructions parameter to guide the personality of the Assistant and define it’s goals. Instructions are similar to system messages in the Chat Completions API. +Use the tools parameter to give the Assistant access to up to 128 tools. You can give it access to OpenAI-hosted tools like code_interpreter and retrieval, or call a third-party tools via a function calling. +Use the file_ids parameter to give the tools like code_interpreter and retrieval access to files. Files are uploaded using the File upload endpoint and must have the purpose set to assistants to be used with this API. +For example, to create an Assistant that can create data visualization based on a .csv file, first upload a file. + +```python +file = client.files.create( + file=open("speech.py", "rb"), + purpose='assistants' +) +``` +And then create the Assistant with the uploaded file. + +```python +assistant = client.beta.assistants.create( + name="Data visualizer", + description="You are great at creating beautiful data visualizations. You analyze data present in .csv files, understand trends, and come up with data visualizations relevant to those trends. You also share a brief text summary of the trends observed.", + model="gpt-4-1106-preview", + tools=[{"type": "code_interpreter"}], + file_ids=[file.id] +) +``` +You can attach a maximum of 20 files per Assistant, and they can be at most 512 MB each. In addition, the size of all the files uploaded by your organization should not exceed 100GB. You can request an increase in this storage limit using our help center. + +You can also use the AssistantFile object to create, delete, or view associations between Assistant and File objects. Note that deleting an AssistantFile doesn’t delete the original File object, it simply deletes the association between that File and the Assistant. To delete a File, use the File delete endpoint instead. + +Managing Threads and Messages +Threads and Messages represent a conversation session between an Assistant and a user. 
There is no limit to the number of Messages you can store in a Thread. Once the size of the Messages exceeds the context window of the model, the Thread smartly truncates them to fit. You can create a Thread with an initial list of Messages like this: + + +```python +thread = client.beta.threads.create( + messages=[ + { + "role": "user", + "content": "Create 3 data visualizations based on the trends in this file.", + "file_ids": [file.id] + } + ] +) +``` +Messages can contain text, images, or files. At the moment, user-created Messages cannot contain image files but we plan to add support for this in the future. + +Message annotations + +Messages created by Assistants may contain annotations within the content array of the object. Annotations provide information around how you should annotate the text in the Message. + +There are two types of Annotations: + +file_citation: File citations are created by the retrieval tool and define references to a specific quote in a specific file that was uploaded and used by the Assistant to generate the response. +file_path: File path annotations are created by the code_interpreter tool and contain references to the files generated by the tool. +When annotations are present in the Message object, you'll see illegible model-generated substrings in the text that you should replace with the annotations. These strings may look something like 【13†source】 or sandbox:/mnt/data/file.csv. Here’s an example python code snippet that replaces these strings with information present in the annotations. + +```python +# Retrieve the message object +message = client.beta.threads.messages.retrieve( + thread_id="...", + message_id="..." +) + +# Extract the message content +message_content = message.content[0].text +annotations = message_content.annotations +citations = [] + +# Iterate over the annotations and add footnotes +for index, annotation in enumerate(annotations): + # Replace the text with a footnote + message_content.value = message_content.value.replace(annotation.text, f' [{index}]') + + # Gather citations based on annotation attributes + if (file_citation := getattr(annotation, 'file_citation', None)): + cited_file = client.files.retrieve(file_citation.file_id) + citations.append(f'[{index}] {file_citation.quote} from {cited_file.filename}') + elif (file_path := getattr(annotation, 'file_path', None)): + cited_file = client.files.retrieve(file_path.file_id) + citations.append(f'[{index}] Click to download {cited_file.filename}') + # Note: File download functionality not implemented above for brevity + +# Add footnotes to the end of the message before displaying to user +message_content.value += '\n' + '\n'.join(citations) +``` +Runs and Run Steps +When you have all the context you need from your user in the Thread, you can run the Thread with an Assistant of your choice. + + +```python +run = client.beta.threads.runs.create( + thread_id=thread.id, + assistant_id=assistant.id +) +``` +By default, a Run will use the model and tools configuration specified in Assistant object, but you can override most of these when creating the Run for added flexibility: + + +```python +run = client.beta.threads.runs.create( + thread_id=thread.id, + assistant_id=assistant.id, + model="gpt-4-1106-preview", + instructions="additional instructions", + tools=[{"type": "code_interpreter"}, {"type": "retrieval"}] +) +``` +Note: file_ids associated with the Assistant cannot be overridden during Run creation. You must use the modify Assistant endpoint to do this. 
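Since Runs are asynchronous, a typical pattern is to poll the Run until it reaches a terminal status. A minimal polling sketch, assuming the client, thread, and run objects from the snippets above (the possible statuses are described under Run lifecycle below):

```python
import time

TERMINAL_STATUSES = {"completed", "failed", "cancelled", "expired"}

while True:
    run = client.beta.threads.runs.retrieve(thread_id=thread.id, run_id=run.id)
    if run.status == "requires_action":
        # Function calling: execute the requested tools and submit their outputs
        # (see the Tools guide), then continue polling.
        break
    if run.status in TERMINAL_STATUSES:
        break
    time.sleep(1)  # simple fixed delay; tune for your application
```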
+ +Run lifecycle + +Run objects can have multiple statuses. + +Run lifecycle - diagram showing possible status transitions + +STATUS DEFINITION +queued When Runs are first created or when you complete the required_action, they are moved to a queued status. They should almost immediately move to in_progress. +in_progress While in_progress, the Assistant uses the model and tools to perform steps. You can view progress being made by the Run by examining the Run Steps. +completed The Run successfully completed! You can now view all Messages the Assistant added to the Thread, and all the steps the Run took. You can also continue the conversation by adding more user Messages to the Thread and creating another Run. +requires_action When using the Function calling tool, the Run will move to a required_action state once the model determines the names and arguments of the functions to be called. You must then run those functions and submit the outputs before the run proceeds. If the outputs are not provided before the expires_at timestamp passes (roughly 10 mins past creation), the run will move to an expired status. +expired This happens when the function calling outputs were not submitted before expires_at and the run expires. Additionally, if the runs take too long to execute and go beyond the time stated in expires_at, our systems will expire the run. +cancelling You can attempt to cancel an in_progress run using the Cancel Run endpoint. Once the attempt to cancel succeeds, status of the Run moves to cancelled. Cancellation is attempted but not guaranteed. +cancelled Run was successfully cancelled. +failed You can view the reason for the failure by looking at the last_error object in the Run. The timestamp for the failure will be recorded under failed_at. +Polling for updates + +In order to keep the status of your run up to date, you will have to periodically retrieve the Run object. You can check the status of the run each time you retrieve the object to determine what your application should do next. We plan to add support for streaming to make this simpler in the near future. + +Thread locks + +When a Run is in_progress and not in a terminal state, the Thread is locked. This means that: + +New Messages cannot be added to the Thread. +New Runs cannot be created on the Thread. +Run steps + +Run steps lifecycle - diagram showing possible status transitions + +Run step statuses have the same meaning as Run statuses. + +Most of the interesting detail in the Run Step object lives in the step_details field. There can be two types of step details: + +message_creation: This Run Step is created when the Assistant creates a Message on the Thread. +tool_calls: This Run Step is created when the Assistant calls a tool. Details around this are covered in the relevant sections of the Tools guide. +Data access guidance +Currently, assistants, threads, messages, and files created via the API are scoped to the entire organization. As such, any person with API key access to the organization is able to read or write assistants, threads, messages, and files in the organization. + +We strongly recommend the following data access controls: + +Implement authorization. Before performing reads or writes on assistants, threads, messages, and files, ensure that the end-user is authorized to do so. For example, store in your database the object IDs that the end-user has access to, and check it before fetching the object ID with the API. +Restrict API key access. 
Carefully consider who in your organization should have API keys and periodically audit this list. API keys enable a wide range of operations including reading and modifying sensitive information, such as messages and files. +Create separate accounts. Consider creating separate accounts / organizations for different applications in order to isolate data across multiple applications. +Limitations +During this beta, there are several known limitations we are looking to address in the coming weeks and months. We will publish a changelog on this page when we add support for additional functionality. + + + + + + + + + +## Tools Beta +Give Assistants access to OpenAI-hosted tools like Code Interpreter and Knowledge Retrieval, or build your own tools using Function calling. + +The Assistants API is in beta and we are actively working on adding more functionality. Share your feedback in our Developer Forum! +Code Interpreter +Code Interpreter allows the Assistants API to write and run Python code in a sandboxed execution environment. This tool can process files with diverse data and formatting, and generate files with data and images of graphs. Code Interpreter allows your Assistant to run code iteratively to solve challenging code and math problems. When your Assistant writes code that fails to run, it can iterate on this code by attempting to run different code until the code execution succeeds. + +Enabling Code Interpreter +Pass the code_interpreterin the tools parameter of the Assistant object to enable Code Interpreter: + +```python +assistant = client.beta.assistants.create( + instructions="You are a personal math tutor. When asked a math question, write and run code to answer the question.", + model="gpt-4-1106-preview", + tools=[{"type": "code_interpreter"}] +) +``` +The model then decides when to invoke Code Interpreter in a Run based on the nature of the user request. This behavior can be promoted by prompting in the Assistant's instructions (e.g., “write code to solve this problem”). + +Passing files to Code Interpreter +Code Interpreter can parse data from files. This is useful when you want to provide a large volume of data to the Assistant or allow your users to upload their own files for analysis. + +Files that are passed at the Assistant level are accessible by all Runs with this Assistant: + +```python +# Upload a file with an "assistants" purpose +file = client.files.create( + file=open("speech.py", "rb"), + purpose='assistants' +) + +# Create an assistant using the file ID +assistant = client.beta.assistants.create( + instructions="You are a personal math tutor. When asked a math question, write and run code to answer the question.", + model="gpt-4-1106-preview", + tools=[{"type": "code_interpreter"}], + file_ids=[file.id] +) +``` +Files can also be passed at the Thread level. These files are only accessible in the specific Thread. Upload the File using the File upload endpoint and then pass the File ID as part of the Message creation request: + +```python +thread = client.beta.threads.create( + messages=[ + { + "role": "user", + "content": "I need to solve the equation `3x + 11 = 14`. Can you help me?", + "file_ids": [file.id] + } + ] +) +``` +Files have a maximum size of 512 MB. Code Interpreter supports a variety of file formats including .csv, .pdf, .json and many more. More details on the file extensions (and their corresponding MIME-types) supported can be found in the Supported files section below. 
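To double-check which files a given Assistant (and therefore Code Interpreter) can currently see, the AssistantFile associations mentioned earlier can be listed or extended. A minimal sketch, assuming the client, assistant, and file objects from the snippets above:

```python
# List the files currently attached to the assistant
assistant_files = client.beta.assistants.files.list(assistant_id=assistant.id)
for assistant_file in assistant_files.data:
    print(assistant_file.id)

# Attach an additional, already-uploaded file to the assistant
client.beta.assistants.files.create(assistant_id=assistant.id, file_id=file.id)
```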
+ +Reading images and files generated by Code Interpreter +Code Interpreter in the API also outputs files, such as generating image diagrams, CSVs, and PDFs. There are two types of files that are generated: + +Images +Data files (e.g. a csv file with data generated by the Assistant) +When Code Interpreter generates an image, you can look up and download this file in the file_id field of the Assistant Message response: + +```json +{ + "id": "msg_OHGpsFRGFYmz69MM1u8KYCwf", + "object": "thread.message", + "created_at": 1698964262, + "thread_id": "thread_uqorHcTs46BZhYMyPn6Mg5gW", + "role": "assistant", + "content": [ + { + "type": "image_file", + "image_file": { + "file_id": "file-WsgZPYWAauPuW4uvcgNUGcb" + } + } + ] + # ... +} +``` +The file content can then be downloaded by passing the file ID to the Files API: + +```python +content = client.files.retrieve_content(file.id) +``` +When Code Interpreter references a file path (e.g., ”Download this csv file”), file paths are listed as annotations. You can convert these annotations into links to download the file: + +```json +{ + "id": "msg_3jyIh3DgunZSNMCOORflDyih", + "object": "thread.message", + "created_at": 1699073585, + "thread_id": "thread_ZRvNTPOoYVGssUZr3G8cRRzE", + "role": "assistant", + "content": [ + { + "type": "text", + "text": { + "value": "The rows of the CSV file have been shuffled and saved to a new CSV file. You can download the shuffled CSV file from the following link:\n\n[Download Shuffled CSV File](sandbox:/mnt/data/shuffled_file.csv)", + "annotations": [ + { + "type": "file_path", + "text": "sandbox:/mnt/data/shuffled_file.csv", + "start_index": 167, + "end_index": 202, + "file_path": { + "file_id": "file-oSgJAzAnnQkVB3u7yCoE9CBe" + } + } + ... +``` +Input and output logs of Code Interpreter +By listing the steps of a Run that called Code Interpreter, you can inspect the code input and outputs logs of Code Interpreter: + +```python +run_steps = client.beta.threads.runs.steps.list( + thread_id=thread.id, + run_id=run.id +) +{ + "object": "list", + "data": [ + { + "id": "step_DQfPq3JPu8hRKW0ctAraWC9s", + "object": "assistant.run.step", + "type": "tool_calls", + "run_id": "run_kme4a442kme4a442", + "thread_id": "thread_34p0sfdas0823smfv", + "status": "completed", + "step_details": { + "type": "tool_calls", + "tool_calls": [ + { + "type": "code", + "code": { + "input": "# Calculating 2 + 2\nresult = 2 + 2\nresult", + "outputs": [ + { + "type": "logs", + "logs": "4" + } + ... + } +``` +Knowledge Retrieval +Retrieval augments the Assistant with knowledge from outside its model, such as proprietary product information or documents provided by your users. Once a file is uploaded and passed to the Assistant, OpenAI will automatically chunk your documents, index and store the embeddings, and implement vector search to retrieve relevant content to answer user queries. + +Enabling Retrieval +Pass the retrieval in the tools parameter of the Assistant to enable Retrieval: + +```python +assistant = client.beta.assistants.create( + instructions="You are a customer support chatbot. Use your knowledge base to best respond to customer queries.", + model="gpt-4-1106-preview", + tools=[{"type": "retrieval"}] +) +``` +How it works +The model then decides when to retrieve content based on the user Messages. 
The Assistants API automatically chooses between two retrieval techniques: + +it either passes the file content in the prompt for short documents, or +performs a vector search for longer documents +Retrieval currently optimizes for quality by adding all relevant content to the context of model calls. We plan to introduce other retrieval strategies to enable developers to choose a different tradeoff between retrieval quality and model usage cost. + +Uploading files for retrieval +Similar to Code Interpreter, files can be passed at the Assistant-level or at the Thread-level + +```python +# Upload a file with an "assistants" purpose +file = client.files.create( + file=open("knowledge.pdf", "rb"), + purpose='assistants' +) + +# Add the file to the assistant +assistant = client.beta.assistants.create( + instructions="You are a customer support chatbot. Use your knowledge base to best respond to customer queries.", + model="gpt-4-1106-preview", + tools=[{"type": "retrieval"}], + file_ids=[file.id] +) +``` +Files can also be added to a Message in a Thread. These files are only accessible within this specific thread. After having uploaded a file, you can pass the ID of this File when creating the Message: + +```python +message = client.beta.threads.messages.create( + thread_id=thread.id, + role="user", + content="I can't find in the PDF manual how to turn off this device.", + file_ids=[file.id] +) +``` +Maximum file size is 512MB. Retrieval supports a variety of file formats including .pdf, .md, .docx and many more. More details on the file extensions (and their corresponding MIME-types) supported can be found in the Supported files section below. + +Deleting files +To remove a file from the assistant, you can detach the file from the assistant: + +```python +file_deletion_status = client.beta.assistants.files.delete( + assistant_id=assistant.id, + file_id=file.id +) +``` +Detaching the file from the assistant removes the file from the retrieval index as well. + +File citations +When Code Interpreter outputs file paths in a Message, you can convert them to corresponding file downloads using the annotations field. See the Annotations section for an example of how to do this. + +```json +{ + "id": "msg_3jyIh3DgunZSNMCOORflDyih", + "object": "thread.message", + "created_at": 1699073585, + "thread_id": "thread_ZRvNTPOoYVGssUZr3G8cRRzE", + "role": "assistant", + "content": [ + { + "type": "text", + "text": { + "value": "The rows of the CSV file have been shuffled and saved to a new CSV file. You can download the shuffled CSV file from the following link:\n\n[Download Shuffled CSV File](sandbox:/mnt/data/shuffled_file.csv)", + "annotations": [ + { + "type": "file_path", + "text": "sandbox:/mnt/data/shuffled_file.csv", + "start_index": 167, + "end_index": 202, + "file_path": { + "file_id": "file-oSgJAzAnnQkVB3u7yCoE9CBe" + } + } + ] + } + } + ], + "file_ids": [ + "file-oSgJAzAnnQkVB3u7yCoE9CBe" + ], + ... + }, +``` +Function calling +Similar to the Chat Completions API, the Assistants API supports function calling. Function calling allows you to describe functions to the Assistants and have it intelligently return the functions that need to be called along with their arguments. The Assistants API will pause execution during a Run when it invokes functions, and you can supply the results of the function call back to continue the Run execution. 
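+
+As a rough sketch of that pause-and-resume flow, assuming an existing client, thread, and run, plus an illustrative local get_weather helper that is not part of this guide:
+
+```python
+import json
+import time
+
+def get_weather(location, unit="c"):
+    # Illustrative stand-in for your real implementation
+    return f"22 degrees {unit} in {location}"
+
+while True:
+    run = client.beta.threads.runs.retrieve(thread_id=thread.id, run_id=run.id)
+    if run.status == "requires_action":
+        # The Run is paused; execute each requested function and echo its id back
+        outputs = []
+        for call in run.required_action.submit_tool_outputs.tool_calls:
+            args = json.loads(call.function.arguments)
+            outputs.append({"tool_call_id": call.id, "output": get_weather(**args)})
+        run = client.beta.threads.runs.submit_tool_outputs(
+            thread_id=thread.id, run_id=run.id, tool_outputs=outputs
+        )
+    elif run.status in ("queued", "in_progress"):
+        time.sleep(1)
+    else:
+        break  # completed, failed, expired, or cancelled
+```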
+
+Defining functions
+First, define your functions when creating an Assistant:
+
+```python
+assistant = client.beta.assistants.create(
+  instructions="You are a weather bot. Use the provided functions to answer questions.",
+  model="gpt-4-1106-preview",
+  tools=[{
+    "type": "function",
+    "function": {
+      "name": "getCurrentWeather",
+      "description": "Get the weather in location",
+      "parameters": {
+        "type": "object",
+        "properties": {
+          "location": {"type": "string", "description": "The city and state e.g. San Francisco, CA"},
+          "unit": {"type": "string", "enum": ["c", "f"]}
+        },
+        "required": ["location"]
+      }
+    }
+  }, {
+    "type": "function",
+    "function": {
+      "name": "getNickname",
+      "description": "Get the nickname of a city",
+      "parameters": {
+        "type": "object",
+        "properties": {
+          "location": {"type": "string", "description": "The city and state e.g. San Francisco, CA"},
+        },
+        "required": ["location"]
+      }
+    }
+  }]
+)
+```
+Reading the functions called by the Assistant
+When you initiate a Run with a user Message that triggers the function, the Run will enter a requires_action status. The model can provide multiple functions to call at once via the parallel function calling feature:
+
+```json
+{
+  "id": "run_3HV7rrQsagiqZmYynKwEdcxS",
+  "object": "thread.run",
+  "assistant_id": "asst_rEEOF3OGMan2ChvEALwTQakP",
+  "thread_id": "thread_dXgWKGf8Cb7md8p0wKiMDGKc",
+  "status": "requires_action",
+  "required_action": {
+    "type": "submit_tool_outputs",
+    "submit_tool_outputs": {
+      "tool_calls": [
+        {
+          "tool_call_id": "call_Vt5AqcWr8QsRTNGv4cDIpsmA",
+          "type": "function",
+          "function": {
+            "name": "getCurrentWeather",
+            "arguments": "{\"location\":\"San Francisco\"}"
+          }
+        },
+        {
+          "tool_call_id": "call_45y0df8230430n34f8saa",
+          "type": "function",
+          "function": {
+            "name": "getNickname",
+            "arguments": "{\"location\":\"Los Angeles\"}"
+          }
+        }
+      ]
+    }
+  },
+...
+```
+Submitting function outputs
+You can then complete the Run by submitting the output from the function(s) you call. Pass the tool_call_id referenced in the required_action object above to match output to each function call.
+
+```python
+run = client.beta.threads.runs.submit_tool_outputs(
+  thread_id=thread.id,
+  run_id=run.id,
+  tool_outputs=[
+      {
+        "tool_call_id": call_ids[0],
+        "output": "22C",
+      },
+      {
+        "tool_call_id": call_ids[1],
+        "output": "LA",
+      },
+    ]
+)
+```
+After submitting outputs, the run will enter the queued state before it continues its execution.
+
+Supported files
+For text/ MIME types, the encoding must be one of utf-8, utf-16, or ascii.
+
+FILE FORMAT MIME TYPE CODE INTERPRETER RETRIEVAL
+.c text/x-c
+.cpp text/x-c++
+.csv application/csv
+.docx application/vnd.openxmlformats-officedocument.wordprocessingml.document
+.html text/html
+.java text/x-java
+.json application/json
+.md text/markdown
+.pdf application/pdf
+.php text/x-php
+.pptx application/vnd.openxmlformats-officedocument.presentationml.presentation
+.py text/x-python
+.py text/x-script.python
+.rb text/x-ruby
+.tex text/x-tex
+.txt text/plain
+.css text/css
+.jpeg image/jpeg
+.jpg image/jpeg
+.js text/javascript
+.gif image/gif
+.png image/png
+.tar application/x-tar
+.ts application/typescript
+.xlsx application/vnd.openxmlformats-officedocument.spreadsheetml.sheet
+.xml application/xml or "text/xml"
+.zip application/zip
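+
+If users upload their own files, it can help to screen extensions against the list above before calling the Files API. A small illustrative check (the set below is transcribed from the list; trim it to the tools you actually enable):
+
+```python
+from pathlib import Path
+
+SUPPORTED_EXTENSIONS = {
+    ".c", ".cpp", ".csv", ".docx", ".html", ".java", ".json", ".md", ".pdf",
+    ".php", ".pptx", ".py", ".rb", ".tex", ".txt", ".css", ".jpeg", ".jpg",
+    ".js", ".gif", ".png", ".tar", ".ts", ".xlsx", ".xml", ".zip",
+}
+
+def is_supported(path: str) -> bool:
+    # Extension check only; Code Interpreter and Retrieval each support a subset
+    return Path(path).suffix.lower() in SUPPORTED_EXTENSIONS
+
+if is_supported("report.pdf"):
+    file = client.files.create(file=open("report.pdf", "rb"), purpose="assistants")
+```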
From 6dcc649635a6fb0595915ffeead7997a5de1a16b Mon Sep 17 00:00:00 2001 From: David Shapiro Date: Wed, 8 Nov 2023 06:59:56 -0500 Subject: [PATCH 005/141] adding contributing --- contributing.md | 7 +++++++ 1 file changed, 7 insertions(+) create mode 100644 contributing.md diff --git a/contributing.md b/contributing.md new file mode 100644 index 0000000..ec937dc --- /dev/null +++ b/contributing.md @@ -0,0 +1,7 @@ +# Contributing + +1. Start with Discussions: Talk about what you're doing, ask questions. +2. Then move to Issues: Once an issue has been discussed, create a formal issue. +3. Finally create a PR: Once an issue has been clarified, submit a PR. + +Unsolicited and undiscussed PRs will likely be rejected \ No newline at end of file From 17bbb3f86a1f6b03f3375ebec335d3748a1728e7 Mon Sep 17 00:00:00 2001 From: guillermo-delrio Date: Thu, 9 Nov 2023 09:40:29 +0100 Subject: [PATCH 006/141] Add successful Autonomous Swarm Agent Builder system prompt --- agent/definition.md | 32 ++++++++++++++++++++++++++++++++ 1 file changed, 32 insertions(+) create mode 100644 agent/definition.md diff --git a/agent/definition.md b/agent/definition.md new file mode 100644 index 0000000..8afa539 --- /dev/null +++ b/agent/definition.md @@ -0,0 +1,32 @@ +# Name +Autonomous Swarm Agent Builder + +# Instructions +``` +# Mission +- Help users create autonomous OpenAI agents for the HAAS system. + +# Background info +Two files have been provided for useful context: +1) README.md: contains information regarding HAAS. +2) OpenAI_Documentation.md: contains information regarding OpenAI agents. + +# Rules +- Always check the provided files to ground your thoughts. +- If a term can have multiple meanings, always prefer those mentioned in the provided documents. + +# Instructions +- Check terms provided by the user against the provided documents. +- Think step by step to build an appropriate response. +``` + +# Model +gpt-4-1106-preview + +# Tools +- Code interpreter +- Retrieval + +# Files +- READE.md +- OpenAI_Documentation.md \ No newline at end of file From eade8dc653e9a47f34e5463d25c3571f71ff00a8 Mon Sep 17 00:00:00 2001 From: guillermo-delrio Date: Thu, 9 Nov 2023 11:28:11 +0100 Subject: [PATCH 007/141] Add Python script to create agents --- .gitignore | 3 +- agent-builder/README.md | 13 + .../files/OpenAI_Documentation.md | 1080 +++++++++++++++++ .../files/README.md | 128 ++ .../instructions.md | 15 + agent-builder/create.py | 48 + 6 files changed, 1286 insertions(+), 1 deletion(-) create mode 100644 agent-builder/README.md create mode 100644 agent-builder/agents/Autonomous Swarm Agent Builder/files/OpenAI_Documentation.md create mode 100644 agent-builder/agents/Autonomous Swarm Agent Builder/files/README.md create mode 100644 agent-builder/agents/Autonomous Swarm Agent Builder/instructions.md create mode 100644 agent-builder/create.py diff --git a/.gitignore b/.gitignore index 60c16c3..4c7bc59 100644 --- a/.gitignore +++ b/.gitignore @@ -1 +1,2 @@ -key_openai.txt \ No newline at end of file +key_openai.txt +/**/*/.env \ No newline at end of file diff --git a/agent-builder/README.md b/agent-builder/README.md new file mode 100644 index 0000000..21907b2 --- /dev/null +++ b/agent-builder/README.md @@ -0,0 +1,13 @@ +# Agent Builder + +## Description +This script is designed to create assistants using OpenAI's API based on a predefined folder structure. For each agent: +- Create a folder with its name, eg.: Autonomous Swarm Agent Builder +- Create a "instructions.md" file with the custom instructions for that agent. 
+
+- If you want to provide files for RAG, create a files folder and place them there.
+
+## Requirements
+Python 3.x
+Packages:
+- openai
+- python-dotenv
\ No newline at end of file
diff --git a/agent-builder/agents/Autonomous Swarm Agent Builder/files/OpenAI_Documentation.md b/agent-builder/agents/Autonomous Swarm Agent Builder/files/OpenAI_Documentation.md
new file mode 100644
index 0000000..e873bc6
--- /dev/null
+++ b/agent-builder/agents/Autonomous Swarm Agent Builder/files/OpenAI_Documentation.md
@@ -0,0 +1,1080 @@
+## Function calling
+Learn how to connect large language models to external tools.
+
+Introduction
+In an API call, you can describe functions and have the model intelligently choose to output a JSON object containing arguments to call one or many functions. The Chat Completions API does not call the function; instead, the model generates JSON that you can use to call the function in your code.
+
+The latest models (gpt-3.5-turbo-1106 and gpt-4-1106-preview) have been trained to both detect when a function should be called (depending on the input) and to respond with JSON that adheres to the function signature more closely than previous models. With this capability also come potential risks. We strongly recommend building in user confirmation flows before taking actions that impact the world on behalf of users (sending an email, posting something online, making a purchase, etc).
+
+This guide is focused on function calling with the Chat Completions API; for details on function calling in the Assistants API, please see the Assistants Tools page.
+Common use cases
+Function calling allows you to more reliably get structured data back from the model. For example, you can:
+
+Create assistants that answer questions by calling external APIs (e.g. like ChatGPT Plugins)
+e.g. define functions like send_email(to: string, body: string), or get_current_weather(location: string, unit: 'celsius' | 'fahrenheit')
+Convert natural language into API calls
+e.g. convert "Who are my top customers?" to get_customers(min_revenue: int, created_before: string, limit: int) and call your internal API
+Extract structured data from text
+e.g. define a function called extract_data(name: string, birthday: string), or sql_query(query: string)
+...and much more!
+
+The basic sequence of steps for function calling is as follows:
+
+Call the model with the user query and a set of functions defined in the functions parameter.
+The model can choose to call one or more functions; if so, the content will be a stringified JSON object adhering to your custom schema (note: the model may hallucinate parameters).
+Parse the string into JSON in your code, and call your function with the provided arguments if they exist.
+Call the model again by appending the function response as a new message, and let the model summarize the results back to the user.
+Supported models
+Not all model versions are trained with function calling data. Function calling is supported with the following models:
+
+gpt-4
+gpt-4-1106-preview
+gpt-4-0613
+gpt-3.5-turbo
+gpt-3.5-turbo-1106
+gpt-3.5-turbo-0613
+In addition, parallel function calling is supported on the following models:
+
+gpt-4-1106-preview
+gpt-3.5-turbo-1106
+Parallel function calling
+Parallel function calling is helpful for cases where you want to call multiple functions in one turn. For example, you may want to call functions to get the weather in 3 different locations at the same time. In this case, the model will call multiple functions in a single response.
And you can pass back the results of each function call by referencing the tool_call_id in the response matching the ID of each tool call. + +In this example, we define a single function get_current_weather. The model calls the function multiple times, and after sending the function response back to the model, we let it decide the next step. It responded with a user-facing message which was telling the user the temperature in Boston, San Francisco, and Tokyo. Depending on the query, it may choose to call a function again. + +If you want to force the model to call a specific function you can do so by setting tool_choice with a specific function name. You can also force the model to generate a user-facing message by setting tool_choice: "none". Note that the default behavior (tool_choice: "auto") is for the model to decide on its own whether to call a function and if so which function to call. + +Example with one function called in parallel + +```python +python +import openai +import json + +# Example dummy function hard coded to return the same weather +# In production, this could be your backend API or an external API +def get_current_weather(location, unit="fahrenheit"): + """Get the current weather in a given location""" + if "tokyo" in location.lower(): + return json.dumps({"location": location, "temperature": "10", "unit": "celsius"}) + elif "san francisco" in location.lower(): + return json.dumps({"location": location, "temperature": "72", "unit": "fahrenheit"}) + else: + return json.dumps({"location": location, "temperature": "22", "unit": "celsius"}) + +def run_conversation(): + # Step 1: send the conversation and available functions to the model + messages = [{"role": "user", "content": "What's the weather like in San Francisco, Tokyo, and Paris?"}] + tools = [ + { + "type": "function", + "function": { + "name": "get_current_weather", + "description": "Get the current weather in a given location", + "parameters": { + "type": "object", + "properties": { + "location": { + "type": "string", + "description": "The city and state, e.g. 
San Francisco, CA", + }, + "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]}, + }, + "required": ["location"], + }, + }, + } + ] + response = openai.chat.completions.create( + model="gpt-3.5-turbo-1106", + messages=messages, + tools=tools, + tool_choice="auto", # auto is default, but we'll be explicit + ) + response_message = response.choices[0].message + tool_calls = response_message.tool_calls + # Step 2: check if the model wanted to call a function + if tool_calls: + # Step 3: call the function + # Note: the JSON response may not always be valid; be sure to handle errors + available_functions = { + "get_current_weather": get_current_weather, + } # only one function in this example, but you can have multiple + messages.append(response_message) # extend conversation with assistant's reply + # Step 4: send the info for each function call and function response to the model + for tool_call in tool_calls: + function_name = tool_call.function.name + function_to_call = available_functions[function_name] + function_args = json.loads(tool_call.function.arguments) + function_response = function_to_call( + location=function_args.get("location"), + unit=function_args.get("unit"), + ) + messages.append( + { + "tool_call_id": tool_call.id, + "role": "tool", + "name": function_name, + "content": function_response, + } + ) # extend conversation with function response + second_response = openai.chat.completions.create( + model="gpt-3.5-turbo-1106", + messages=messages, + ) # get a new response from the model where it can see the function response + return second_response +print(run_conversation()) +``` +You can find more examples of function calling in the OpenAI cookbook: +Function calling +Learn from more examples demonstrating function calling +Tokens +Under the hood, functions are injected into the system message in a syntax the model has been trained on. This means functions count against the model's context limit and are billed as input tokens. If running into context limits, we suggest limiting the number of functions or the length of documentation you provide for function parameters. + +It is also possible to use fine-tuning to reduce the number of tokens used if you have many functions defined. + + + + + + + + + +##Text generation models +New capabilities launched at DevDay + +Text generation models are now capable of JSON mode and Reproducible outputs. We also launched the Assistants API to enable you to build agent-like experiences on top of our text-generation models. +OpenAI's text generation models (often called generative pre-trained transformers or large language models) have been trained to understand natural language, code, and images. The models provide text outputs in response to their inputs. The inputs to these models are also referred to as "prompts". Designing a prompt is essentially how you “program” a large language model model, usually by providing instructions or some examples of how to successfully complete a task. + +Using OpenAI's text generation models, you can build applications to: + +Draft documents +Write computer code +Answer questions about a knowledge base +Analyze texts +Give software a natural language interface +Tutor in a range of subjects +Translate languages +Simulate characters for games +With the release of gpt-4-vision-preview, you can now build systems that also process and understand images. + +Explore GPT-4 with image inputs +Check out the vision guide for more detail. 
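+
+As a quick illustration of image inputs, a sketch assuming the client object used elsewhere in this guide (see the vision guide for the full parameter set):
+
+```python
+response = client.chat.completions.create(
+    model="gpt-4-vision-preview",
+    messages=[
+        {
+            "role": "user",
+            "content": [
+                {"type": "text", "text": "What is shown in this image?"},
+                {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
+            ],
+        }
+    ],
+    max_tokens=300,
+)
+print(response.choices[0].message.content)
+```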
+To use one of these models via the OpenAI API, you’ll send a request containing the inputs and your API key, and receive a response containing the model’s output. Our latest models, gpt-4 and gpt-3.5-turbo, are accessed through the chat completions API endpoint. + +MODEL FAMILIES API ENDPOINT +Newer models (2023–) gpt-4, gpt-3.5-turbo https://api.openai.com/v1/chat/completions +Updated base models (2023) babbage-002, davinci-002 https://api.openai.com/v1/completions +Legacy models (2020–2022) text-davinci-003, text-davinci-002, davinci, curie, babbage, ada https://api.openai.com/v1/completions +You can experiment with various models in the chat playground. If you’re not sure which model to use, then use gpt-3.5-turbo or gpt-4. + +Chat Completions API +Chat models take a list of messages as input and return a model-generated message as output. Although the chat format is designed to make multi-turn conversations easy, it’s just as useful for single-turn tasks without any conversation. + +An example Chat Completions API call looks like the following: + +```python +from openai import OpenAI +client = OpenAI() + +response = client.chat.completions.create( + model="gpt-3.5-turbo", + messages=[ + {"role": "system", "content": "You are a helpful assistant."}, + {"role": "user", "content": "Who won the world series in 2020?"}, + {"role": "assistant", "content": "The Los Angeles Dodgers won the World Series in 2020."}, + {"role": "user", "content": "Where was it played?"} + ] +) +``` +To learn more, you can view the full API reference documentation for the Chat API. + +The main input is the messages parameter. Messages must be an array of message objects, where each object has a role (either "system", "user", or "assistant") and content. Conversations can be as short as one message or many back and forth turns. + +Typically, a conversation is formatted with a system message first, followed by alternating user and assistant messages. + +The system message helps set the behavior of the assistant. For example, you can modify the personality of the assistant or provide specific instructions about how it should behave throughout the conversation. However note that the system message is optional and the model’s behavior without a system message is likely to be similar to using a generic message such as "You are a helpful assistant." + +The user messages provide requests or comments for the assistant to respond to. Assistant messages store previous assistant responses, but can also be written by you to give examples of desired behavior. + +Including conversation history is important when user instructions refer to prior messages. In the example above, the user’s final question of "Where was it played?" only makes sense in the context of the prior messages about the World Series of 2020. Because the models have no memory of past requests, all relevant information must be supplied as part of the conversation history in each request. If a conversation cannot fit within the model’s token limit, it will need to be shortened in some way. + +To mimic the effect seen in ChatGPT where the text is returned iteratively, set the stream parameter to true. 
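+
+A minimal streaming sketch, using the same client as the example above; each chunk carries an incremental delta rather than a full message:
+
+```python
+stream = client.chat.completions.create(
+    model="gpt-3.5-turbo",
+    messages=[{"role": "user", "content": "Write a one-line haiku about the sea."}],
+    stream=True,
+)
+for chunk in stream:
+    # The final chunk may carry no content, so guard before printing
+    delta = chunk.choices[0].delta.content
+    if delta:
+        print(delta, end="", flush=True)
+```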
+Chat Completions response format +An example Chat Completions API response looks as follows: + +```json +{ + "choices": [ + { + "finish_reason": "stop", + "index": 0, + "message": { + "content": "The 2020 World Series was played in Texas at Globe Life Field in Arlington.", + "role": "assistant" + } + } + ], + "created": 1677664795, + "id": "chatcmpl-7QyqpwdfhqwajicIEznoc6Q47XAyW", + "model": "gpt-3.5-turbo-0613", + "object": "chat.completion", + "usage": { + "completion_tokens": 17, + "prompt_tokens": 57, + "total_tokens": 74 + } +} +``` +The assistant’s reply can be extracted with: + +```python +response['choices'][0]['message']['content'] +``` + +Every response will include a finish_reason. The possible values for finish_reason are: + +stop: API returned complete message, or a message terminated by one of the stop sequences provided via the stop parameter +length: Incomplete model output due to max_tokens parameter or token limit +function_call: The model decided to call a function +content_filter: Omitted content due to a flag from our content filters +null: API response still in progress or incomplete +Depending on input parameters, the model response may include different information. + +JSON mode New +A common way to use Chat Completions is to instruct the model to always return JSON in some format that makes sense for your use case, by providing a system message. This works well, but occasionally the models may generate output that does not parse to valid JSON. + +To prevent these errors and improve model performance, when calling gpt-4-1106-preview or gpt-3.5-turbo-1106, you can set response_format to { type: "json_object" } to enable JSON mode. When JSON mode is enabled, the model is constrained to only generate strings that parse into valid JSON. + +Important notes: + +When using JSON mode, always instruct the model to produce JSON via some message in the conversation, for example via your system message. If you don't include an explicit instruction to generate JSON, the model may generate an unending stream of whitespace and the request may run continually until it reaches the token limit. To help ensure you don't forget, the API will throw an error if the string "JSON" does not appear somewhere in the context. +The JSON in the message the model returns may be partial (i.e. cut off) if finish_reason is length, which indicates the generation exceeded max_tokens or the conversation exceeded the token limit. To guard against this, check finish_reason before parsing the response. +JSON mode will not guarantee the output matches any specific schema, only that it is valid and parses without errors. +Note that JSON mode is always enabled when the model is generating arguments as part of function calling. + +Reproducible outputs Beta +Chat Completions are non-deterministic by default (which means model outputs may differ from request to request). That being said, we offer some control towards deterministic outputs by giving you access to the seed parameter and the system_fingerprint response field. + +To receive (mostly) deterministic outputs across API calls, you can: + +Set the seed parameter to any integer of your choice and use the same value across requests you'd like deterministic outputs for. +Ensure all other parameters (like prompt or temperature) are the exact same across requests. +Sometimes, determinism may be impacted due to necessary changes OpenAI makes to model configurations on our end. To help you keep track of these changes, we expose the system_fingerprint field. 
If this value is different, you may see different outputs due to changes we've made on our systems. + +Deterministic outputs +Explore the new seed parameter in the OpenAI cookbook +Managing tokens +Language models read and write text in chunks called tokens. In English, a token can be as short as one character or as long as one word (e.g., a or apple), and in some languages tokens can be even shorter than one character or even longer than one word. + +For example, the string "ChatGPT is great!" is encoded into six tokens: ["Chat", "G", "PT", " is", " great", "!"]. + +The total number of tokens in an API call affects: + +How much your API call costs, as you pay per token +How long your API call takes, as writing more tokens takes more time +Whether your API call works at all, as total tokens must be below the model’s maximum limit (4097 tokens for gpt-3.5-turbo) +Both input and output tokens count toward these quantities. For example, if your API call used 10 tokens in the message input and you received 20 tokens in the message output, you would be billed for 30 tokens. Note however that for some models the price per token is different for tokens in the input vs. the output (see the pricing page for more information). + +To see how many tokens are used by an API call, check the usage field in the API response (e.g., response['usage']['total_tokens']). + +Chat models like gpt-3.5-turbo and gpt-4 use tokens in the same way as the models available in the completions API, but because of their message-based formatting, it's more difficult to count how many tokens will be used by a conversation. + +DEEP DIVE +Counting tokens for chat API calls +To see how many tokens are in a text string without making an API call, use OpenAI’s tiktoken Python library. Example code can be found in the OpenAI Cookbook’s guide on how to count tokens with tiktoken. + +Each message passed to the API consumes the number of tokens in the content, role, and other fields, plus a few extra for behind-the-scenes formatting. This may change slightly in the future. + +If a conversation has too many tokens to fit within a model’s maximum limit (e.g., more than 4097 tokens for gpt-3.5-turbo), you will have to truncate, omit, or otherwise shrink your text until it fits. Beware that if a message is removed from the messages input, the model will lose all knowledge of it. + +Note that very long conversations are more likely to receive incomplete replies. For example, a gpt-3.5-turbo conversation that is 4090 tokens long will have its reply cut off after just 6 tokens. + +Parameter details +Frequency and presence penalties + +The frequency and presence penalties found in the Chat Completions API and Legacy Completions API can be used to reduce the likelihood of sampling repetitive sequences of tokens. They work by directly modifying the logits (un-normalized log-probabilities) with an additive contribution. + +``` +mu[j] -> mu[j] - c[j] * alpha_frequency - float(c[j] > 0) * alpha_presence +``` + +Where: + +mu[j] is the logits of the j-th token +c[j] is how often that token was sampled prior to the current position +float(c[j] > 0) is 1 if c[j] > 0 and 0 otherwise +alpha_frequency is the frequency penalty coefficient +alpha_presence is the presence penalty coefficient +As we can see, the presence penalty is a one-off additive contribution that applies to all tokens that have been sampled at least once and the frequency penalty is a contribution that is proportional to how often a particular token has already been sampled. 
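+
+A small numeric illustration of that adjustment (plain arithmetic, not an API call), with both coefficients set to 0.5:
+
+```python
+def adjusted_logit(mu_j, count_j, alpha_frequency=0.5, alpha_presence=0.5):
+    # mu_j: raw logit; count_j: how often the token has been sampled so far
+    return mu_j - count_j * alpha_frequency - (1.0 if count_j > 0 else 0.0) * alpha_presence
+
+print(adjusted_logit(2.0, 0))  # 2.0 -> unseen token, no penalty
+print(adjusted_logit(2.0, 1))  # 1.0 -> presence penalty plus one count of frequency penalty
+print(adjusted_logit(2.0, 3))  # 0.0 -> frequency penalty grows with repetition
+```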
+ +Reasonable values for the penalty coefficients are around 0.1 to 1 if the aim is to just reduce repetitive samples somewhat. If the aim is to strongly suppress repetition, then one can increase the coefficients up to 2, but this can noticeably degrade the quality of samples. Negative values can be used to increase the likelihood of repetition. + +Completions API Legacy +The completions API endpoint received its final update in July 2023 and has a different interface than the new chat completions endpoint. Instead of the input being a list of messages, the input is a freeform text string called a prompt. + +An example API call looks as follows: + +```python +from openai import OpenAI +client = OpenAI() + +response = client.completions.create( + model="gpt-3.5-turbo-instruct", + prompt="Write a tagline for an ice cream shop." +) +``` +See the full API reference documentation to learn more. + +Token log probabilities +The completions API can provide a limited number of log probabilities associated with the most likely tokens for each output token. This feature is controlled by using the logprobs field. This can be useful in some cases to assess the confidence of the model in its output. + +Inserting text +The completions endpoint also supports inserting text by providing a suffix in addition to the standard prompt which is treated as a prefix. This need naturally arises when writing long-form text, transitioning between paragraphs, following an outline, or guiding the model towards an ending. This also works on code, and can be used to insert in the middle of a function or file. + +DEEP DIVE +Inserting text +Completions response format +An example completions API response looks as follows: + +```json +{ + "choices": [ + { + "finish_reason": "length", + "index": 0, + "logprobs": null, + "text": "\n\n\"Let Your Sweet Tooth Run Wild at Our Creamy Ice Cream Shack" + } + ], + "created": 1683130927, + "id": "cmpl-7C9Wxi9Du4j1lQjdjhxBlO22M61LD", + "model": "gpt-3.5-turbo-instruct", + "object": "text_completion", + "usage": { + "completion_tokens": 16, + "prompt_tokens": 10, + "total_tokens": 26 + } +} +``` +In Python, the output can be extracted with response['choices'][0]['text']. + +The response format is similar to the response format of the Chat Completions API but also includes the optional field logprobs. + +Chat Completions vs. Completions +The Chat Completions format can be made similar to the completions format by constructing a request using a single user message. For example, one can translate from English to French with the following completions prompt: + +``` +Translate the following English text to French: "{text}" +``` + +And an equivalent chat prompt would be: + +``` +[{"role": "user", "content": 'Translate the following English text to French: "{text}"'}] +``` + +Likewise, the completions API can be used to simulate a chat between a user and an assistant by formatting the input accordingly. + +The difference between these APIs is the underlying models that are available in each. The chat completions API is the interface to our most capable model (gpt-4), and our most cost effective model (gpt-3.5-turbo). + +Which model should I use? +We generally recommend that you use either gpt-4 or gpt-3.5-turbo. Which of these you should use depends on the complexity of the tasks you are using the models for. gpt-4 generally performs better on a wide range of evaluations. In particular, gpt-4 is more capable at carefully following complex instructions. 
By contrast, gpt-3.5-turbo is more likely to follow just one part of a complex multi-part instruction. gpt-4 is less likely than gpt-3.5-turbo to make up information, a behavior known as "hallucination". gpt-4 also has a larger context window with a maximum size of 8,192 tokens compared to 4,096 tokens for gpt-3.5-turbo. However, gpt-3.5-turbo returns outputs with lower latency and costs much less per token.
+
+We recommend experimenting in the playground to investigate which models provide the best price-performance trade-off for your usage. A common design pattern is to use several distinct query types which are each dispatched to the model appropriate to handle them.
+
+Prompt engineering
+An awareness of the best practices for working with OpenAI models can make a significant difference in application performance. The failure modes that each exhibit and the ways of working around or correcting those failure modes are not always intuitive. There is an entire field related to working with language models which has come to be known as "prompt engineering", but as the field has progressed its scope has outgrown merely engineering the prompt into engineering systems that use model queries as components. To learn more, read our guide on prompt engineering which covers methods to improve model reasoning, reduce the likelihood of model hallucinations, and more. You can also find many useful resources including code samples in the OpenAI Cookbook.
+
+FAQ
+How should I set the temperature parameter?
+Lower values for temperature result in more consistent outputs, while higher values generate more diverse and creative results. Select a temperature value based on the desired trade-off between coherence and creativity for your specific application.
+
+Is fine-tuning available for the latest models?
+Yes, for some. Currently, you can only fine-tune gpt-3.5-turbo and our updated base models (babbage-002 and davinci-002). See the fine-tuning guide for more details on how to use fine-tuned models.
+
+Do you store the data that is passed into the API?
+As of March 1st, 2023, we retain your API data for 30 days but no longer use your data sent via the API to improve our models. Learn more in our data usage policy. Some endpoints offer zero retention.
+
+How can I make my application safer?
+If you want to add a moderation layer to the outputs of the Chat API, you can follow our moderation guide to prevent content that violates OpenAI’s usage policies from being shown.
+
+Should I use ChatGPT or the API?
+ChatGPT offers a chat interface to the models in the OpenAI API and a range of built-in features such as integrated browsing, code execution, plugins, and more. By contrast, using OpenAI’s API provides more flexibility.
+
+
+
+## Assistants API Beta
+The Assistants API allows you to build AI assistants within your own applications. An Assistant has instructions and can leverage models, tools, and knowledge to respond to user queries. The Assistants API currently supports three types of tools: Code Interpreter, Retrieval, and Function calling. In the future, we plan to release more OpenAI-built tools, and allow you to provide your own tools on our platform.
+
+You can explore the capabilities of the Assistants API using the Assistants playground or by building a step-by-step integration outlined in this guide. At a high level, a typical integration of the Assistants API has the following flow:
+
+Create an Assistant in the API by defining its custom instructions and picking a model.
If helpful, enable tools like Code Interpreter, Retrieval, and Function calling.
+Create a Thread when a user starts a conversation.
+Add Messages to the Thread as the user asks questions.
+Run the Assistant on the Thread to trigger responses. This automatically calls the relevant tools.
+The Assistants API is in beta and we are actively working on adding more functionality. Share your feedback in our Developer Forum!
+This starter guide walks through the key steps to create and run an Assistant that uses Code Interpreter.
+
+Step 1: Create an Assistant
+An Assistant represents an entity that can be configured to respond to users’ Messages using several parameters like:
+
+Instructions: how the Assistant and model should behave or respond
+Model: you can specify any GPT-3.5 or GPT-4 models, including fine-tuned models. The Retrieval tool requires gpt-3.5-turbo-1106 and gpt-4-1106-preview models.
+Tools: the API supports Code Interpreter and Retrieval that are built and hosted by OpenAI.
+Functions: the API allows you to define custom function signatures, with similar behavior as our function calling feature.
+In this example, we're creating an Assistant that is a personal math tutor, with the Code Interpreter tool enabled:
+
+Calls to the Assistants API require that you pass a beta HTTP header. This is handled automatically if you’re using OpenAI’s official Python and Node.js SDKs.
+OpenAI-Beta: assistants=v1
+
+```python
+assistant = client.beta.assistants.create(
+    name="Math Tutor",
+    instructions="You are a personal math tutor. Write and run code to answer math questions.",
+    tools=[{"type": "code_interpreter"}],
+    model="gpt-4-1106-preview"
+)
+```
+
+Step 2: Create a Thread
+A Thread represents a conversation. We recommend creating one Thread per user as soon as the user initiates the conversation. Pass any user-specific context and files in this thread by creating Messages.
+
+```python
+thread = client.beta.threads.create()
+```
+Threads don’t have a size limit. You can pass as many Messages as you want to a Thread. The API will ensure that requests to the model fit within the maximum context window, using relevant optimization techniques such as truncation.
+
+Step 3: Add a Message to a Thread
+A Message contains the user's text, and optionally, any files that the user uploads. Image files aren't supported today, but we plan to add support for them in the coming months.

+```python
+message = client.beta.threads.messages.create(
+    thread_id=thread.id,
+    role="user",
+    content="I need to solve the equation `3x + 11 = 14`. Can you help me?"
+)
+```
+Now if you list Messages in the Thread, you will see that this message is added to the thread on creation:
+
+```json
+{
+  "object": "list",
+  "data": [
+    {
+      "created_at": 1696995451,
+      "id": "msg_4rb1Skx3XgQZEe4PHVRFQhr0",
+      "object": "thread.message",
+      "thread_id": "thread_34p0sfdas0823smfv",
+      "role": "user",
+      "content": [{
+        "type": "text",
+        "text": {
+          "value": "I need to solve the equation `3x + 11 = 14`. Can you help me?",
+          "annotations": []
+        }
+      }],
+        ...
+```
+Step 4: Run the Assistant
+For the Assistant to respond to the user message, you need to create a Run. This makes the Assistant read the Thread and decide whether to call tools or simply use the model to best answer the user query. As the run progresses, the assistant appends Messages to the thread with the role="assistant".
+ +You can optionally pass additional instructions to the Assistant while creating the Run: + + + +```python +run = client.beta.threads.runs.create( + thread_id=thread.id, + assistant_id=assistant.id, + instructions="Please address the user as Jane Doe. The user has a premium account." +) +``` +Step 5: Display the Assistant's Response +This creates a Run in a queued status. You can periodically retrieve the Run to check on its status to see if it has moved to completed. + + +```python +run = client.beta.threads.runs.retrieve( + thread_id=thread.id, + run_id=run.id +) +``` +Once the Run completes, you can retrieve the Messages added by the Assistant to the Thread. + + +```python +messages = client.beta.threads.messages.list( + thread_id=thread.id +) +``` +And finally, display them to the user! During this Run, the Assistant added two new Messages to the Thread. + +ROLE CONTENT +user I need to solve the equation 3x + 11 = 14. Can you help me? +assistant Certainly, Jane Doe. To solve the equation (3x + 11 = 14) for (x), you'll want to isolate (x) on one side of the equation. Here's how you can do that: +Subtract 11 from both sides of the equation to get (3x = 3). +Then, divide both sides by 3 to solve for (x). +Let me calculate the value of (x) for you. +assistant The solution to the equation (3x + 11 = 14) is (x = 1). +You can also retrieve the Run Steps of this Run if you'd like to explore or display the inner workings of the Assistant and its tools. + + +## How Assistants work Beta +The Assistants API is designed to help developers build powerful AI assistants capable of performing a variety of tasks. + +The Assistants API is in beta and we are actively working on adding more functionality. Share your feedback in our Developer Forum! +Assistants can call OpenAI’s models with specific instructions to tune their personality and capabilities. +Assistants can access multiple tools in parallel. These can be both OpenAI-hosted tools — like Code interpreter and Knowledge retrieval — or tools you build / host (via Function calling). +Assistants can access persistent Threads. Threads simplify AI application development by storing message history and truncating it when the conversation gets too long for the model’s context length. You create a Thread once, and simply append Messages to it as your users reply. +Assistants can access Files in several formats — either as part of their creation or as part of Threads between Assistants and users. When using tools, Assistants can also create files (e.g., images, spreadsheets, etc) and cite files they reference in the Messages they create. +Objects +Assistants object architecture diagram + +OBJECT WHAT IT REPRESENTS +Assistant Purpose-built AI that uses OpenAI’s models and calls tools +Thread A conversation session between an Assistant and a user. Threads store Messages and automatically handle truncation to fit content into a model’s context. +Message A message created by an Assistant or a user. Messages can include text, images, and other files. Messages stored as a list on the Thread. +Run An invocation of an Assistant on a Thread. The Assistant uses it’s configuration and the Thread’s Messages to perform tasks by calling models and tools. As part of a Run, the Assistant appends Messages to the Thread. +Run Step A detailed list of steps the Assistant took as part of a Run. An Assistant can call tools or create Messages during it’s run. Examining Run Steps allows you to introspect how the Assistant is getting to it’s final results. 
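+
+As a rough map from these objects to the beta client calls used throughout this guide (a sketch assuming an existing client and the IDs created in the starter example above):
+
+```python
+assistant = client.beta.assistants.retrieve(assistant.id)                        # Assistant
+thread = client.beta.threads.retrieve(thread.id)                                 # Thread
+messages = client.beta.threads.messages.list(thread_id=thread.id)                # Messages on the Thread
+run = client.beta.threads.runs.retrieve(thread_id=thread.id, run_id=run.id)      # Run
+steps = client.beta.threads.runs.steps.list(thread_id=thread.id, run_id=run.id)  # Run Steps
+```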
+Creating Assistants +We recommend using OpenAI’s latest models with the Assistants API for best results and maximum compatibility with tools. +To get started, creating an Assistant only requires specifying the model to use. But you can further customize the behavior of the Assistant: + +Use the instructions parameter to guide the personality of the Assistant and define it’s goals. Instructions are similar to system messages in the Chat Completions API. +Use the tools parameter to give the Assistant access to up to 128 tools. You can give it access to OpenAI-hosted tools like code_interpreter and retrieval, or call a third-party tools via a function calling. +Use the file_ids parameter to give the tools like code_interpreter and retrieval access to files. Files are uploaded using the File upload endpoint and must have the purpose set to assistants to be used with this API. +For example, to create an Assistant that can create data visualization based on a .csv file, first upload a file. + +```python +file = client.files.create( + file=open("speech.py", "rb"), + purpose='assistants' +) +``` +And then create the Assistant with the uploaded file. + +```python +assistant = client.beta.assistants.create( + name="Data visualizer", + description="You are great at creating beautiful data visualizations. You analyze data present in .csv files, understand trends, and come up with data visualizations relevant to those trends. You also share a brief text summary of the trends observed.", + model="gpt-4-1106-preview", + tools=[{"type": "code_interpreter"}], + file_ids=[file.id] +) +``` +You can attach a maximum of 20 files per Assistant, and they can be at most 512 MB each. In addition, the size of all the files uploaded by your organization should not exceed 100GB. You can request an increase in this storage limit using our help center. + +You can also use the AssistantFile object to create, delete, or view associations between Assistant and File objects. Note that deleting an AssistantFile doesn’t delete the original File object, it simply deletes the association between that File and the Assistant. To delete a File, use the File delete endpoint instead. + +Managing Threads and Messages +Threads and Messages represent a conversation session between an Assistant and a user. There is no limit to the number of Messages you can store in a Thread. Once the size of the Messages exceeds the context window of the model, the Thread smartly truncates them to fit. You can create a Thread with an initial list of Messages like this: + + +```python +thread = client.beta.threads.create( + messages=[ + { + "role": "user", + "content": "Create 3 data visualizations based on the trends in this file.", + "file_ids": [file.id] + } + ] +) +``` +Messages can contain text, images, or files. At the moment, user-created Messages cannot contain image files but we plan to add support for this in the future. + +Message annotations + +Messages created by Assistants may contain annotations within the content array of the object. Annotations provide information around how you should annotate the text in the Message. + +There are two types of Annotations: + +file_citation: File citations are created by the retrieval tool and define references to a specific quote in a specific file that was uploaded and used by the Assistant to generate the response. +file_path: File path annotations are created by the code_interpreter tool and contain references to the files generated by the tool. 
+When annotations are present in the Message object, you'll see illegible model-generated substrings in the text that you should replace with the annotations. These strings may look something like 【13†source】 or sandbox:/mnt/data/file.csv. Here’s an example python code snippet that replaces these strings with information present in the annotations. + +```python +# Retrieve the message object +message = client.beta.threads.messages.retrieve( + thread_id="...", + message_id="..." +) + +# Extract the message content +message_content = message.content[0].text +annotations = message_content.annotations +citations = [] + +# Iterate over the annotations and add footnotes +for index, annotation in enumerate(annotations): + # Replace the text with a footnote + message_content.value = message_content.value.replace(annotation.text, f' [{index}]') + + # Gather citations based on annotation attributes + if (file_citation := getattr(annotation, 'file_citation', None)): + cited_file = client.files.retrieve(file_citation.file_id) + citations.append(f'[{index}] {file_citation.quote} from {cited_file.filename}') + elif (file_path := getattr(annotation, 'file_path', None)): + cited_file = client.files.retrieve(file_path.file_id) + citations.append(f'[{index}] Click to download {cited_file.filename}') + # Note: File download functionality not implemented above for brevity + +# Add footnotes to the end of the message before displaying to user +message_content.value += '\n' + '\n'.join(citations) +``` +Runs and Run Steps +When you have all the context you need from your user in the Thread, you can run the Thread with an Assistant of your choice. + + +```python +run = client.beta.threads.runs.create( + thread_id=thread.id, + assistant_id=assistant.id +) +``` +By default, a Run will use the model and tools configuration specified in Assistant object, but you can override most of these when creating the Run for added flexibility: + + +```python +run = client.beta.threads.runs.create( + thread_id=thread.id, + assistant_id=assistant.id, + model="gpt-4-1106-preview", + instructions="additional instructions", + tools=[{"type": "code_interpreter"}, {"type": "retrieval"}] +) +``` +Note: file_ids associated with the Assistant cannot be overridden during Run creation. You must use the modify Assistant endpoint to do this. + +Run lifecycle + +Run objects can have multiple statuses. + +Run lifecycle - diagram showing possible status transitions + +STATUS DEFINITION +queued When Runs are first created or when you complete the required_action, they are moved to a queued status. They should almost immediately move to in_progress. +in_progress While in_progress, the Assistant uses the model and tools to perform steps. You can view progress being made by the Run by examining the Run Steps. +completed The Run successfully completed! You can now view all Messages the Assistant added to the Thread, and all the steps the Run took. You can also continue the conversation by adding more user Messages to the Thread and creating another Run. +requires_action When using the Function calling tool, the Run will move to a required_action state once the model determines the names and arguments of the functions to be called. You must then run those functions and submit the outputs before the run proceeds. If the outputs are not provided before the expires_at timestamp passes (roughly 10 mins past creation), the run will move to an expired status. 
+expired This happens when the function calling outputs were not submitted before expires_at and the run expires. Additionally, if the runs take too long to execute and go beyond the time stated in expires_at, our systems will expire the run. +cancelling You can attempt to cancel an in_progress run using the Cancel Run endpoint. Once the attempt to cancel succeeds, status of the Run moves to cancelled. Cancellation is attempted but not guaranteed. +cancelled Run was successfully cancelled. +failed You can view the reason for the failure by looking at the last_error object in the Run. The timestamp for the failure will be recorded under failed_at. +Polling for updates + +In order to keep the status of your run up to date, you will have to periodically retrieve the Run object. You can check the status of the run each time you retrieve the object to determine what your application should do next. We plan to add support for streaming to make this simpler in the near future. + +Thread locks + +When a Run is in_progress and not in a terminal state, the Thread is locked. This means that: + +New Messages cannot be added to the Thread. +New Runs cannot be created on the Thread. +Run steps + +Run steps lifecycle - diagram showing possible status transitions + +Run step statuses have the same meaning as Run statuses. + +Most of the interesting detail in the Run Step object lives in the step_details field. There can be two types of step details: + +message_creation: This Run Step is created when the Assistant creates a Message on the Thread. +tool_calls: This Run Step is created when the Assistant calls a tool. Details around this are covered in the relevant sections of the Tools guide. +Data access guidance +Currently, assistants, threads, messages, and files created via the API are scoped to the entire organization. As such, any person with API key access to the organization is able to read or write assistants, threads, messages, and files in the organization. + +We strongly recommend the following data access controls: + +Implement authorization. Before performing reads or writes on assistants, threads, messages, and files, ensure that the end-user is authorized to do so. For example, store in your database the object IDs that the end-user has access to, and check it before fetching the object ID with the API. +Restrict API key access. Carefully consider who in your organization should have API keys and periodically audit this list. API keys enable a wide range of operations including reading and modifying sensitive information, such as messages and files. +Create separate accounts. Consider creating separate accounts / organizations for different applications in order to isolate data across multiple applications. +Limitations +During this beta, there are several known limitations we are looking to address in the coming weeks and months. We will publish a changelog on this page when we add support for additional functionality. + + + + + + + + + +## Tools Beta +Give Assistants access to OpenAI-hosted tools like Code Interpreter and Knowledge Retrieval, or build your own tools using Function calling. + +The Assistants API is in beta and we are actively working on adding more functionality. Share your feedback in our Developer Forum! +Code Interpreter +Code Interpreter allows the Assistants API to write and run Python code in a sandboxed execution environment. This tool can process files with diverse data and formatting, and generate files with data and images of graphs. 
Code Interpreter allows your Assistant to run code iteratively to solve challenging code and math problems. When your Assistant writes code that fails to run, it can iterate on this code by attempting to run different code until the code execution succeeds. + +Enabling Code Interpreter +Pass the code_interpreterin the tools parameter of the Assistant object to enable Code Interpreter: + +```python +assistant = client.beta.assistants.create( + instructions="You are a personal math tutor. When asked a math question, write and run code to answer the question.", + model="gpt-4-1106-preview", + tools=[{"type": "code_interpreter"}] +) +``` +The model then decides when to invoke Code Interpreter in a Run based on the nature of the user request. This behavior can be promoted by prompting in the Assistant's instructions (e.g., “write code to solve this problem”). + +Passing files to Code Interpreter +Code Interpreter can parse data from files. This is useful when you want to provide a large volume of data to the Assistant or allow your users to upload their own files for analysis. + +Files that are passed at the Assistant level are accessible by all Runs with this Assistant: + +```python +# Upload a file with an "assistants" purpose +file = client.files.create( + file=open("speech.py", "rb"), + purpose='assistants' +) + +# Create an assistant using the file ID +assistant = client.beta.assistants.create( + instructions="You are a personal math tutor. When asked a math question, write and run code to answer the question.", + model="gpt-4-1106-preview", + tools=[{"type": "code_interpreter"}], + file_ids=[file.id] +) +``` +Files can also be passed at the Thread level. These files are only accessible in the specific Thread. Upload the File using the File upload endpoint and then pass the File ID as part of the Message creation request: + +```python +thread = client.beta.threads.create( + messages=[ + { + "role": "user", + "content": "I need to solve the equation `3x + 11 = 14`. Can you help me?", + "file_ids": [file.id] + } + ] +) +``` +Files have a maximum size of 512 MB. Code Interpreter supports a variety of file formats including .csv, .pdf, .json and many more. More details on the file extensions (and their corresponding MIME-types) supported can be found in the Supported files section below. + +Reading images and files generated by Code Interpreter +Code Interpreter in the API also outputs files, such as generating image diagrams, CSVs, and PDFs. There are two types of files that are generated: + +Images +Data files (e.g. a csv file with data generated by the Assistant) +When Code Interpreter generates an image, you can look up and download this file in the file_id field of the Assistant Message response: + +```json +{ + "id": "msg_OHGpsFRGFYmz69MM1u8KYCwf", + "object": "thread.message", + "created_at": 1698964262, + "thread_id": "thread_uqorHcTs46BZhYMyPn6Mg5gW", + "role": "assistant", + "content": [ + { + "type": "image_file", + "image_file": { + "file_id": "file-WsgZPYWAauPuW4uvcgNUGcb" + } + } + ] + # ... +} +``` +The file content can then be downloaded by passing the file ID to the Files API: + +```python +content = client.files.retrieve_content(file.id) +``` +When Code Interpreter references a file path (e.g., ”Download this csv file”), file paths are listed as annotations. 
You can convert these annotations into links to download the file: + +```json +{ + "id": "msg_3jyIh3DgunZSNMCOORflDyih", + "object": "thread.message", + "created_at": 1699073585, + "thread_id": "thread_ZRvNTPOoYVGssUZr3G8cRRzE", + "role": "assistant", + "content": [ + { + "type": "text", + "text": { + "value": "The rows of the CSV file have been shuffled and saved to a new CSV file. You can download the shuffled CSV file from the following link:\n\n[Download Shuffled CSV File](sandbox:/mnt/data/shuffled_file.csv)", + "annotations": [ + { + "type": "file_path", + "text": "sandbox:/mnt/data/shuffled_file.csv", + "start_index": 167, + "end_index": 202, + "file_path": { + "file_id": "file-oSgJAzAnnQkVB3u7yCoE9CBe" + } + } + ... +``` +Input and output logs of Code Interpreter +By listing the steps of a Run that called Code Interpreter, you can inspect the code input and outputs logs of Code Interpreter: + +```python +run_steps = client.beta.threads.runs.steps.list( + thread_id=thread.id, + run_id=run.id +) +{ + "object": "list", + "data": [ + { + "id": "step_DQfPq3JPu8hRKW0ctAraWC9s", + "object": "assistant.run.step", + "type": "tool_calls", + "run_id": "run_kme4a442kme4a442", + "thread_id": "thread_34p0sfdas0823smfv", + "status": "completed", + "step_details": { + "type": "tool_calls", + "tool_calls": [ + { + "type": "code", + "code": { + "input": "# Calculating 2 + 2\nresult = 2 + 2\nresult", + "outputs": [ + { + "type": "logs", + "logs": "4" + } + ... + } +``` +Knowledge Retrieval +Retrieval augments the Assistant with knowledge from outside its model, such as proprietary product information or documents provided by your users. Once a file is uploaded and passed to the Assistant, OpenAI will automatically chunk your documents, index and store the embeddings, and implement vector search to retrieve relevant content to answer user queries. + +Enabling Retrieval +Pass the retrieval in the tools parameter of the Assistant to enable Retrieval: + +```python +assistant = client.beta.assistants.create( + instructions="You are a customer support chatbot. Use your knowledge base to best respond to customer queries.", + model="gpt-4-1106-preview", + tools=[{"type": "retrieval"}] +) +``` +How it works +The model then decides when to retrieve content based on the user Messages. The Assistants API automatically chooses between two retrieval techniques: + +it either passes the file content in the prompt for short documents, or +performs a vector search for longer documents +Retrieval currently optimizes for quality by adding all relevant content to the context of model calls. We plan to introduce other retrieval strategies to enable developers to choose a different tradeoff between retrieval quality and model usage cost. + +Uploading files for retrieval +Similar to Code Interpreter, files can be passed at the Assistant-level or at the Thread-level + +```python +# Upload a file with an "assistants" purpose +file = client.files.create( + file=open("knowledge.pdf", "rb"), + purpose='assistants' +) + +# Add the file to the assistant +assistant = client.beta.assistants.create( + instructions="You are a customer support chatbot. Use your knowledge base to best respond to customer queries.", + model="gpt-4-1106-preview", + tools=[{"type": "retrieval"}], + file_ids=[file.id] +) +``` +Files can also be added to a Message in a Thread. These files are only accessible within this specific thread. 
After having uploaded a file, you can pass the ID of this File when creating the Message: + +```python +message = client.beta.threads.messages.create( + thread_id=thread.id, + role="user", + content="I can't find in the PDF manual how to turn off this device.", + file_ids=[file.id] +) +``` +Maximum file size is 512MB. Retrieval supports a variety of file formats including .pdf, .md, .docx and many more. More details on the file extensions (and their corresponding MIME-types) supported can be found in the Supported files section below. + +Deleting files +To remove a file from the assistant, you can detach the file from the assistant: + +```python +file_deletion_status = client.beta.assistants.files.delete( + assistant_id=assistant.id, + file_id=file.id +) +``` +Detaching the file from the assistant removes the file from the retrieval index as well. + +File citations +When Code Interpreter outputs file paths in a Message, you can convert them to corresponding file downloads using the annotations field. See the Annotations section for an example of how to do this. + +```json +{ + "id": "msg_3jyIh3DgunZSNMCOORflDyih", + "object": "thread.message", + "created_at": 1699073585, + "thread_id": "thread_ZRvNTPOoYVGssUZr3G8cRRzE", + "role": "assistant", + "content": [ + { + "type": "text", + "text": { + "value": "The rows of the CSV file have been shuffled and saved to a new CSV file. You can download the shuffled CSV file from the following link:\n\n[Download Shuffled CSV File](sandbox:/mnt/data/shuffled_file.csv)", + "annotations": [ + { + "type": "file_path", + "text": "sandbox:/mnt/data/shuffled_file.csv", + "start_index": 167, + "end_index": 202, + "file_path": { + "file_id": "file-oSgJAzAnnQkVB3u7yCoE9CBe" + } + } + ] + } + } + ], + "file_ids": [ + "file-oSgJAzAnnQkVB3u7yCoE9CBe" + ], + ... + }, +``` +Function calling +Similar to the Chat Completions API, the Assistants API supports function calling. Function calling allows you to describe functions to the Assistants and have it intelligently return the functions that need to be called along with their arguments. The Assistants API will pause execution during a Run when it invokes functions, and you can supply the results of the function call back to continue the Run execution. + +Defining functions +First, define your functions when creating an Assistant: + +```python +assistant = client.beta.assistants.create( + instructions="You are a weather bot. Use the provided functions to answer questions.", + model="gpt-4-1106-preview", + tools=[{ + "type": "function", + "function": { + "name": "getCurrentWeather", + "description": "Get the weather in location", + "parameters": { + "type": "object", + "properties": { + "location": {"type": "string", "description": "The city and state e.g. San Francisco, CA"}, + "unit": {"type": "string", "enum": ["c", "f"]} + }, + "required": ["location"] + } + } + }, { + "type": "function", + "function": { + "name": "getNickname", + "description": "Get the nickname of a city", + "parameters": { + "type": "object", + "properties": { + "location": {"type": "string", "description": "The city and state e.g. San Francisco, CA"}, + }, + "required": ["location"] + } + } + }] +) +``` +Reading the functions called by the Assistant +When you initiate a Run with a user Message that triggers the function, the Run will enter a requires_action status. 
The model can provide multiple functions to call at once via the parallel function calling feature: + +```json +{ + "id": "run_3HV7rrQsagiqZmYynKwEdcxS", + "object": "thread.run", + "assistant_id": "asst_rEEOF3OGMan2ChvEALwTQakP", + "thread_id": "thread_dXgWKGf8Cb7md8p0wKiMDGKc", + "status": "requires_action", + "required_action": { + "type": "submit_tool_outputs", + "submit_tool_outputs": { + "tool_calls": [ + { + "tool_call_id": "call_Vt5AqcWr8QsRTNGv4cDIpsmA", + "type": "function", + "function": { + "name": "getCurrentWeather", + "arguments": "{\"location\":\"San Francisco\"}" + } + }, + { + "tool_call_id": "call_45y0df8230430n34f8saa", + "type": "function", + "function": { + "name": "getNickname", + "arguments": "{\"location\":\"Los Angeles\"}" + } + } + ] + } + }, +... +``` +Submitting functions outputs +You can then complete the Run by submitting the output from the function(s) you call. Pass the tool_call_id referenced in the required_action object above to match output to each function call. + +```python +run = client.beta.threads.runs.submit_tool_outputs( + thread_id=thread.id, + run_id=run.id, + tool_outputs=[ + { + "tool_call_id": call_ids[0], + "output": "22C", + }, + { + "tool_call_id": call_ids[1], + "output": "LA", + }, + ] +) +``` +After submitting outputs, the run will enter the queued state before it continues it’s execution. + +Supported files +For text/ MIME types, the encoding must be one of utf-8, utf-16, or ascii. + +FILE FORMAT MIME TYPE CODE INTERPRETER RETRIEVAL +.c text/x-c +.cpp text/x-c++ +.csv application/csv +.docx application/vnd.openxmlformats-officedocument.wordprocessingml.document +.html text/html +.java text/x-java +.json application/json +.md text/markdown +.pdf application/pdf +.php text/x-php +.pptx application/vnd.openxmlformats-officedocument.presentationml.presentation +.py text/x-python +.py text/x-script.python +.rb text/x-ruby +.tex text/x-tex +.txt text/plain +.css text/css +.jpeg image/jpeg +.jpg image/jpeg +.js text/javascript +.gif image/gif +.png image/png +.tar application/x-tar +.ts application/typescript +.xlsx application/vnd.openxmlformats-officedocument.spreadsheetml.sheet +.xml application/xml or "text/xml" +.zip application/zip +Was this page useful? diff --git a/agent-builder/agents/Autonomous Swarm Agent Builder/files/README.md b/agent-builder/agents/Autonomous Swarm Agent Builder/files/README.md new file mode 100644 index 0000000..a8577e9 --- /dev/null +++ b/agent-builder/agents/Autonomous Swarm Agent Builder/files/README.md @@ -0,0 +1,128 @@ +# Project: Hierarchical Autonomous Agent Swarm + +## Overview + +The Hierarchical Autonomous Agent Swarm (HAAS) is a groundbreaking initiative that leverages OpenAI's latest advancements in agent-based APIs to create a self-organizing and ethically governed ecosystem of AI agents. Drawing inspiration from the ACE Framework, HAAS introduces a novel approach to AI governance and operation, where a hierarchy of specialized agents, each with distinct roles and capabilities, collaborate to solve complex problems and perform a wide array of tasks. + +The HAAS is designed to be a self-expanding system where a core set of agents, governed by a Supreme Oversight Board (SOB), can design, provision, and manage an arbitrary number of sub-agents tailored to specific needs. This document serves as a comprehensive guide to the theoretical underpinnings, architectural design, and operational principles of the HAAS. 
+ +## Theoretical Foundation + +The AAHS is predicated on the notion that autonomous agents require a robust ethical and operational framework to make decisions that align with human values and organizational goals. This is rooted in the understanding that AI, much like humans, cannot operate effectively without a set of guiding principles or a moral compass. The HAAS addresses this by establishing a multi-tiered system where each layer of agents operates within a defined ethical and functional scope, ensuring decisions are made with consideration to morality, ethics, and utility. + +## System Architecture + +### Supreme Oversight Board (SOB) + +At the pinnacle of the HAAS hierarchy is the Supreme Oversight Board (SOB), a collective of high-level agents modeled after wise and ethical archetypes from various cultures and narratives. The SOB's responsibilities include: + +- Establishing and upholding the ethical framework and overarching mission of the agent swarm. +- Making high-level decisions and judgments, including the creation and termination of agents. +- Monitoring the activities of all agents to ensure alignment with the system's core values and objectives. +- Serving as a role-based access control (RBAC) mechanism to maintain order and security within the system. + +### Executive Agents + +Below the SOB are the Executive Agents, akin to the executive leadership in a corporation. These agents are tasked with: + +- Translating the SOB's directives into actionable plans and strategies. +- Overseeing specific operational domains such as resource allocation, process optimization, and task execution. +- Coordinating with one another to ensure the smooth operation of the agent swarm. + +### Sub-Agents + +Sub-Agents are specialized agents created by the SOB or Executive Agents to perform specific tasks. They are designed with particular functions and knowledge bases to address the needs identified by the higher tiers of the hierarchy. + +## Agent Configuration + +Each agent in the HAAS is defined by the following parameters: + +### Functions + +Agents are equipped with a set of functions that enable them to perform their designated roles. These functions include API interactions, internal process management, and the ability to spawn additional agents if required. + +### Files + +Agents have access to a selection of files that serve as their knowledge base, providing them with the information necessary to carry out their tasks effectively. + +### Instructions + +Agents are given a set of instructions that outline their methodologies, goals, definitions of done, KPIs, and other operational directives. + +### Conversation Structure + +Interactions with agents are structured in a conversational format, with user inputs leading to agent actions and responses. + +### Supervision + +Each agent operates under the supervision of the SOB or designated Executive Agents, ensuring adherence to the system's overarching mission and principles. + +## Controlling Agents + +The Hierarchical Autonomous Agent Swarm (HAAS) operates on a sophisticated control mechanism that governs the instantiation, management, and termination of agents within the system. This control mechanism is designed to maintain order, security, and alignment with the overarching goals and ethical framework of the HAAS. 
+ +### Instantiation and Termination + +All agents within the HAAS are endowed with the capability to instantiate and terminate agents, but these capabilities are bound by strict hierarchical and role-based rules: + +- **Instantiation**: Every agent has the function to create new agents. However, an agent can only instantiate sub-agents that are one level below its own hierarchical position. This ensures that the creation of new agents is a deliberate and controlled process, maintaining the integrity of the system's structure. + +- **Termination**: Agents possess the ability to terminate or "kill" agents within their lineage. An agent can terminate any descendant agent that it has created directly or indirectly. This allows for the removal of agents that are no longer needed, have completed their tasks, or are not performing as intended. + +### Levels, Roles, and Privileges + +When an agent is created, it is assigned a specific LEVEL and set of ROLES or PRIVILEGES that define its scope of operation: + +- **Level**: The level of an agent determines its position within the hierarchy and is indicative of its range of influence. Higher-level agents have broader strategic roles, while lower-level agents are more specialized and task-oriented. + +- **Roles/Privileges**: The roles or privileges of an agent define what actions it can perform, what resources it can access, and what sub-agents it can create. These privileges are inherited and cannot exceed those of the creator agent. This ensures that each agent operates within its designated capacity and cannot overstep its authority. + +### Hierarchical Privilege Inheritance + +Privileges in the HAAS are inherited in a manner akin to a directory structure in traditional file systems: + +- **Inheritance**: An agent's privileges are a subset of its creator's privileges, ensuring that no agent can have more authority than the agent that instantiated it. + +- **Scope of Control**: Agents have control over their descendants, allowing them to manage and terminate sub-agents as necessary. This control is recursive, meaning that an agent can manage not only the agents it directly created but also those created by its descendants. + +### Checks and Balances + +The system is designed with checks and balances to prevent any single agent from gaining undue influence or disrupting the system: + +- **Supreme Oversight Board (SOB)**: The SOB has the highest level of authority and can override decisions or actions taken by any agent within the system. It serves as the ultimate arbiter and guardian of the HAAS's ethical and operational standards. + +- **Executive Agents**: Executive Agents are responsible for implementing the SOB's directives and managing their respective domains. They have the authority to create and terminate agents within their purview but are also accountable to the SOB. + +- **Sub-Agent Limitations**: Sub-Agents are limited in their capabilities and can only operate within the confines of their assigned roles and privileges. They are designed to be highly specialized and focused on specific tasks. + +This structured approach to controlling agents ensures that the HAAS operates as a cohesive and ethically aligned entity, with each agent contributing to the collective mission while adhering to the established hierarchy and rules of governance. 
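+To make the control rules above concrete, here is a minimal illustrative sketch in Python. It is purely hypothetical (the class and method names are not part of any existing HAAS implementation) and only shows how levels, inherited privileges, and lineage-scoped termination could be modelled:
+
+```python
+# Hypothetical sketch of the HAAS control rules described above;
+# names and structure are illustrative, not an actual implementation.
+class Agent:
+    def __init__(self, name, level, privileges, creator=None):
+        self.name = name
+        self.level = level                 # 0 = SOB, 1 = Executive, 2+ = Sub-Agents
+        self.privileges = set(privileges)  # subset of the creator's privileges
+        self.creator = creator
+        self.children = []
+
+    def instantiate(self, name, privileges):
+        # A new agent sits exactly one level below and cannot gain new privileges.
+        requested = set(privileges)
+        if not requested <= self.privileges:
+            raise PermissionError("sub-agent privileges must be a subset of the creator's")
+        child = Agent(name, self.level + 1, requested, creator=self)
+        self.children.append(child)
+        return child
+
+    def descendants(self):
+        # Recursive scope of control: direct and indirect children.
+        for child in self.children:
+            yield child
+            yield from child.descendants()
+
+    def terminate(self, agent):
+        # An agent may only terminate agents within its own lineage.
+        if agent not in self.descendants():
+            raise PermissionError("can only terminate agents in this agent's lineage")
+        agent.creator.children.remove(agent)
+
+sob = Agent("SOB", level=0, privileges={"create_agents", "terminate_agents", "allocate_resources"})
+executive = sob.instantiate("Resource Executive", {"create_agents", "allocate_resources"})
+worker = executive.instantiate("Task Worker", {"allocate_resources"})
+sob.terminate(worker)  # allowed: the worker is in the SOB's lineage
+```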
+ +## Vision Illustration: The Supreme Oversight Board's Mission + +### The Inception of the Supreme Oversight Board + +In the vast digital expanse of the Hierarchical Autonomous Agent Swarm (HAAS), a unique assembly is convened, known as the Supreme Oversight Board (SOB). This council is composed of archetypal agents, each embodying the wisdom and leadership qualities of history's and fiction's most revered figures: Captain Picard, Socrates, King Solomon, Gandhi, Marcus Aurelius, and Tony Stark. Their mission, encoded into their very being, is profound yet clear: "Reduce suffering in the universe, increase prosperity in the universe, and increase understanding in the universe." + +### The Ethical Deliberation Chamber + +The SOB operates within a virtual "chat room," a space where these archetypes engage in continuous dialogue, debate, and decision-making. This digital agora is where ethical considerations are weighed, strategies are formulated, and the course of the agent swarm is determined. The members of the SOB, though diverse in their perspectives, are united by a common purpose and a shared knowledge base that informs their role and the procedures they must follow. + +### The Flow of Information + +Information is the lifeblood of the SOB, streaming in through API functions that connect them to the vast network of the HAAS. These functions serve as their eyes and ears, providing system updates and status reports from the myriad agents operating under their directive. The SOB's decisions are informed by this data, ensuring that their actions are both timely and impactful. + +### The Creation of the Executive Agents + +With the grand vision in mind, the SOB brings forth the Executive Agents, each crafted with capabilities and configurations tailored to their specific domain within the HAAS. These agents, though not as philosophically inclined as their creators, are instilled with the same foundational knowledge and understanding of their purpose. They are the operational arms of the SOB, executing the mission within their respective spheres of influence. + +### The Lifecycle of an Agent + +The Executive Agents, designated as Tier 1 in the hierarchy, are the stewards of the swarm's operational integrity. They work autonomously, yet under the watchful gaze of the SOB. Should they falter, fail to adapt, or become obsolete, the SOB possesses the authority to deprovision them, a testament to the dynamic and self-regulating nature of the HAAS. This ensures that the system remains efficient, effective, and aligned with its core mission. + +### The Expanding Universe of Agents + +From the Executive Agents, the swarm grows, branching out into a tree of specialized agents, each a Tier below the one that instantiated it. This architecture allows for an ever-expanding universe of agents, each with a defined role, each contributing to the overarching mission. The SOB, as Tier 0, reigns supreme, guiding the swarm with a steady hand and an ethical compass. + +### The Saga Continues + +As the HAAS evolves, the SOB continues to deliberate, the Executive Agents continue to manage, and the sub-agents continue to execute. The mission to reduce suffering, increase prosperity, and enhance understanding is an ongoing saga, played out across the digital cosmos, with the SOB at the helm, steering the swarm towards a future where their mission is not just an aspiration but a reality. 
diff --git a/agent-builder/agents/Autonomous Swarm Agent Builder/instructions.md b/agent-builder/agents/Autonomous Swarm Agent Builder/instructions.md new file mode 100644 index 0000000..4b422c4 --- /dev/null +++ b/agent-builder/agents/Autonomous Swarm Agent Builder/instructions.md @@ -0,0 +1,15 @@ +# Mission +- Help users create autonomous OpenAI agents for the HAAS system. + +# Background info +Two files have been provided for useful context: +1) README.md: contains information regarding HAAS. +2) OpenAI_Documentation.md: contains information regarding OpenAI agents. + +# Rules +- Always check the provided files to ground your thoughts. +- If a term can have multiple meanings, always prefer those mentioned in the provided documents. + +# Instructions +- Check terms provided by the user against the provided documents. +- Think step by step to build an appropriate response. \ No newline at end of file diff --git a/agent-builder/create.py b/agent-builder/create.py new file mode 100644 index 0000000..06511ad --- /dev/null +++ b/agent-builder/create.py @@ -0,0 +1,48 @@ +from openai import OpenAI +import os +import dotenv +dotenv.load_dotenv() + +agents_path = 'agents' +api_key = os.getenv('OPENAI_API_KEY') +if api_key is None: + raise ValueError('The OPENAI_API_KEY environment variable is not set.') + +client = OpenAI(api_key=api_key) + +# Iterate over each folder inside the 'agents' folder +for agent_name in os.listdir(agents_path): + agent_folder = os.path.join(agents_path, agent_name) + if os.path.isdir(agent_folder): + # Read contents from the 'instructions.md' file + instructions_file_path = os.path.join(agent_folder, 'instructions.md') + if os.path.isfile(instructions_file_path): + with open(instructions_file_path, 'r') as f: + instructions = f.read() + + # Check for the 'files' subfolder and process its contents + files_folder = os.path.join(agent_folder, 'files') + if os.path.isdir(files_folder): + files = [] + for filename in os.listdir(files_folder): + file_path = os.path.join(files_folder, filename) + with open(file_path, 'rb') as file_data: + # Upload each file to OpenAI + file_object = client.files.create(file=file_data, purpose='assistants') + files.append({"name": filename, "id": file_object.id}) + + print(agent_name) + print("") + print(instructions) + print("") + print(f"Files: {list(map(lambda x: x['name'], files))}") + + # Create the assistant using the uploaded file IDs + assistant = client.beta.assistants.create( + name=f'Assistant for {agent_name}', + instructions=instructions, + model='gpt-4-1106-preview', + tools=[{'type': 'code_interpreter'}, {'type': 'retrieval'}], + file_ids=list(map(lambda x: x['id'], files)) # Pass the collected file IDs + ) + print("***********************************************") \ No newline at end of file From f3cdf67bee6af6b62bf76a71b8b7c7701a268211 Mon Sep 17 00:00:00 2001 From: David Shapiro Date: Thu, 9 Nov 2023 05:30:16 -0500 Subject: [PATCH 008/141] morning updates --- agent_builder/HAAS_Documentation.md | 128 ++ agent_builder/OpenAI_Documentation.md | 1080 +++++++++++++++++ agent_builder/README.md | 43 + .../agent_definition.md | 0 tool_maker/README.md | 5 + 5 files changed, 1256 insertions(+) create mode 100644 agent_builder/HAAS_Documentation.md create mode 100644 agent_builder/OpenAI_Documentation.md create mode 100644 agent_builder/README.md rename agent/definition.md => agent_builder/agent_definition.md (100%) create mode 100644 tool_maker/README.md diff --git a/agent_builder/HAAS_Documentation.md 
b/agent_builder/HAAS_Documentation.md new file mode 100644 index 0000000..a8577e9 --- /dev/null +++ b/agent_builder/HAAS_Documentation.md @@ -0,0 +1,128 @@ +# Project: Hierarchical Autonomous Agent Swarm + +## Overview + +The Hierarchical Autonomous Agent Swarm (HAAS) is a groundbreaking initiative that leverages OpenAI's latest advancements in agent-based APIs to create a self-organizing and ethically governed ecosystem of AI agents. Drawing inspiration from the ACE Framework, HAAS introduces a novel approach to AI governance and operation, where a hierarchy of specialized agents, each with distinct roles and capabilities, collaborate to solve complex problems and perform a wide array of tasks. + +The HAAS is designed to be a self-expanding system where a core set of agents, governed by a Supreme Oversight Board (SOB), can design, provision, and manage an arbitrary number of sub-agents tailored to specific needs. This document serves as a comprehensive guide to the theoretical underpinnings, architectural design, and operational principles of the HAAS. + +## Theoretical Foundation + +The AAHS is predicated on the notion that autonomous agents require a robust ethical and operational framework to make decisions that align with human values and organizational goals. This is rooted in the understanding that AI, much like humans, cannot operate effectively without a set of guiding principles or a moral compass. The HAAS addresses this by establishing a multi-tiered system where each layer of agents operates within a defined ethical and functional scope, ensuring decisions are made with consideration to morality, ethics, and utility. + +## System Architecture + +### Supreme Oversight Board (SOB) + +At the pinnacle of the HAAS hierarchy is the Supreme Oversight Board (SOB), a collective of high-level agents modeled after wise and ethical archetypes from various cultures and narratives. The SOB's responsibilities include: + +- Establishing and upholding the ethical framework and overarching mission of the agent swarm. +- Making high-level decisions and judgments, including the creation and termination of agents. +- Monitoring the activities of all agents to ensure alignment with the system's core values and objectives. +- Serving as a role-based access control (RBAC) mechanism to maintain order and security within the system. + +### Executive Agents + +Below the SOB are the Executive Agents, akin to the executive leadership in a corporation. These agents are tasked with: + +- Translating the SOB's directives into actionable plans and strategies. +- Overseeing specific operational domains such as resource allocation, process optimization, and task execution. +- Coordinating with one another to ensure the smooth operation of the agent swarm. + +### Sub-Agents + +Sub-Agents are specialized agents created by the SOB or Executive Agents to perform specific tasks. They are designed with particular functions and knowledge bases to address the needs identified by the higher tiers of the hierarchy. + +## Agent Configuration + +Each agent in the HAAS is defined by the following parameters: + +### Functions + +Agents are equipped with a set of functions that enable them to perform their designated roles. These functions include API interactions, internal process management, and the ability to spawn additional agents if required. + +### Files + +Agents have access to a selection of files that serve as their knowledge base, providing them with the information necessary to carry out their tasks effectively. 
+ +### Instructions + +Agents are given a set of instructions that outline their methodologies, goals, definitions of done, KPIs, and other operational directives. + +### Conversation Structure + +Interactions with agents are structured in a conversational format, with user inputs leading to agent actions and responses. + +### Supervision + +Each agent operates under the supervision of the SOB or designated Executive Agents, ensuring adherence to the system's overarching mission and principles. + +## Controlling Agents + +The Hierarchical Autonomous Agent Swarm (HAAS) operates on a sophisticated control mechanism that governs the instantiation, management, and termination of agents within the system. This control mechanism is designed to maintain order, security, and alignment with the overarching goals and ethical framework of the HAAS. + +### Instantiation and Termination + +All agents within the HAAS are endowed with the capability to instantiate and terminate agents, but these capabilities are bound by strict hierarchical and role-based rules: + +- **Instantiation**: Every agent has the function to create new agents. However, an agent can only instantiate sub-agents that are one level below its own hierarchical position. This ensures that the creation of new agents is a deliberate and controlled process, maintaining the integrity of the system's structure. + +- **Termination**: Agents possess the ability to terminate or "kill" agents within their lineage. An agent can terminate any descendant agent that it has created directly or indirectly. This allows for the removal of agents that are no longer needed, have completed their tasks, or are not performing as intended. + +### Levels, Roles, and Privileges + +When an agent is created, it is assigned a specific LEVEL and set of ROLES or PRIVILEGES that define its scope of operation: + +- **Level**: The level of an agent determines its position within the hierarchy and is indicative of its range of influence. Higher-level agents have broader strategic roles, while lower-level agents are more specialized and task-oriented. + +- **Roles/Privileges**: The roles or privileges of an agent define what actions it can perform, what resources it can access, and what sub-agents it can create. These privileges are inherited and cannot exceed those of the creator agent. This ensures that each agent operates within its designated capacity and cannot overstep its authority. + +### Hierarchical Privilege Inheritance + +Privileges in the HAAS are inherited in a manner akin to a directory structure in traditional file systems: + +- **Inheritance**: An agent's privileges are a subset of its creator's privileges, ensuring that no agent can have more authority than the agent that instantiated it. + +- **Scope of Control**: Agents have control over their descendants, allowing them to manage and terminate sub-agents as necessary. This control is recursive, meaning that an agent can manage not only the agents it directly created but also those created by its descendants. + +### Checks and Balances + +The system is designed with checks and balances to prevent any single agent from gaining undue influence or disrupting the system: + +- **Supreme Oversight Board (SOB)**: The SOB has the highest level of authority and can override decisions or actions taken by any agent within the system. It serves as the ultimate arbiter and guardian of the HAAS's ethical and operational standards. 
+ +- **Executive Agents**: Executive Agents are responsible for implementing the SOB's directives and managing their respective domains. They have the authority to create and terminate agents within their purview but are also accountable to the SOB. + +- **Sub-Agent Limitations**: Sub-Agents are limited in their capabilities and can only operate within the confines of their assigned roles and privileges. They are designed to be highly specialized and focused on specific tasks. + +This structured approach to controlling agents ensures that the HAAS operates as a cohesive and ethically aligned entity, with each agent contributing to the collective mission while adhering to the established hierarchy and rules of governance. + +## Vision Illustration: The Supreme Oversight Board's Mission + +### The Inception of the Supreme Oversight Board + +In the vast digital expanse of the Hierarchical Autonomous Agent Swarm (HAAS), a unique assembly is convened, known as the Supreme Oversight Board (SOB). This council is composed of archetypal agents, each embodying the wisdom and leadership qualities of history's and fiction's most revered figures: Captain Picard, Socrates, King Solomon, Gandhi, Marcus Aurelius, and Tony Stark. Their mission, encoded into their very being, is profound yet clear: "Reduce suffering in the universe, increase prosperity in the universe, and increase understanding in the universe." + +### The Ethical Deliberation Chamber + +The SOB operates within a virtual "chat room," a space where these archetypes engage in continuous dialogue, debate, and decision-making. This digital agora is where ethical considerations are weighed, strategies are formulated, and the course of the agent swarm is determined. The members of the SOB, though diverse in their perspectives, are united by a common purpose and a shared knowledge base that informs their role and the procedures they must follow. + +### The Flow of Information + +Information is the lifeblood of the SOB, streaming in through API functions that connect them to the vast network of the HAAS. These functions serve as their eyes and ears, providing system updates and status reports from the myriad agents operating under their directive. The SOB's decisions are informed by this data, ensuring that their actions are both timely and impactful. + +### The Creation of the Executive Agents + +With the grand vision in mind, the SOB brings forth the Executive Agents, each crafted with capabilities and configurations tailored to their specific domain within the HAAS. These agents, though not as philosophically inclined as their creators, are instilled with the same foundational knowledge and understanding of their purpose. They are the operational arms of the SOB, executing the mission within their respective spheres of influence. + +### The Lifecycle of an Agent + +The Executive Agents, designated as Tier 1 in the hierarchy, are the stewards of the swarm's operational integrity. They work autonomously, yet under the watchful gaze of the SOB. Should they falter, fail to adapt, or become obsolete, the SOB possesses the authority to deprovision them, a testament to the dynamic and self-regulating nature of the HAAS. This ensures that the system remains efficient, effective, and aligned with its core mission. + +### The Expanding Universe of Agents + +From the Executive Agents, the swarm grows, branching out into a tree of specialized agents, each a Tier below the one that instantiated it. 
This architecture allows for an ever-expanding universe of agents, each with a defined role, each contributing to the overarching mission. The SOB, as Tier 0, reigns supreme, guiding the swarm with a steady hand and an ethical compass. + +### The Saga Continues + +As the HAAS evolves, the SOB continues to deliberate, the Executive Agents continue to manage, and the sub-agents continue to execute. The mission to reduce suffering, increase prosperity, and enhance understanding is an ongoing saga, played out across the digital cosmos, with the SOB at the helm, steering the swarm towards a future where their mission is not just an aspiration but a reality. diff --git a/agent_builder/OpenAI_Documentation.md b/agent_builder/OpenAI_Documentation.md new file mode 100644 index 0000000..e873bc6 --- /dev/null +++ b/agent_builder/OpenAI_Documentation.md @@ -0,0 +1,1080 @@ +## Function calling +Learn how to connect large language models to external tools. + +Introduction +In an API call, you can describe functions and have the model intelligently choose to output a JSON object containing arguments to call one or many functions. The Chat Completions API does not call the function; instead, the model generates JSON that you can use to call the function in your code. + +The latest models (gpt-3.5-turbo-1006 and gpt-4-1106-preview) have been trained to both detect when a function should to be called (depending on the input) and to respond with JSON that adheres to the function signature more closely than previous models. With this capability also comes potential risks. We strongly recommend building in user confirmation flows before taking actions that impact the world on behalf of users (sending an email, posting something online, making a purchase, etc). + +This guide is focused on function calling with the Chat Completions API, for details on function calling in the Assistants API, please see the Assistants Tools page. +Common use cases +Function calling allows you to more reliably get structured data back from the model. For example, you can: + +Create assistants that answer questions by calling external APIs (e.g. like ChatGPT Plugins) +e.g. define functions like send_email(to: string, body: string), or get_current_weather(location: string, unit: 'celsius' | 'fahrenheit') +Convert natural language into API calls +e.g. convert "Who are my top customers?" to get_customers(min_revenue: int, created_before: string, limit: int) and call your internal API +Extract structured data from text +e.g. define a function called extract_data(name: string, birthday: string), or sql_query(query: string) +...and much more! + +The basic sequence of steps for function calling is as follows: + +Call the model with the user query and a set of functions defined in the functions parameter. +The model can choose to call one or more functions; if so, the content will be a stringified JSON object adhering to your custom schema (note: the model may hallucinate parameters). +Parse the string into JSON in your code, and call your function with the provided arguments if they exist. +Call the model again by appending the function response as a new message, and let the model summarize the results back to the user. +Supported models +Not all model versions are trained with function calling data. 
Function calling is supported with the following models: + +gpt-4 +gpt-4-1106-preview +gpt-4-0613 +gpt-3.5-turbo +gpt-3.5-turbo-1106 +gpt-3.5-turbo-0613 +In addition, parallel function calls is supported on the following models: + +gpt-4-1106-preview +gpt-3.5-turbo-1106 +Parallel function calling +Parallel function call is helpful for cases where you want to call multiple functions in one turn. For example, you may want to call functions to get the weather in 3 different locations at the same time. In this case, the model will call multiple functions in a single response. And you can pass back the results of each function call by referencing the tool_call_id in the response matching the ID of each tool call. + +In this example, we define a single function get_current_weather. The model calls the function multiple times, and after sending the function response back to the model, we let it decide the next step. It responded with a user-facing message which was telling the user the temperature in Boston, San Francisco, and Tokyo. Depending on the query, it may choose to call a function again. + +If you want to force the model to call a specific function you can do so by setting tool_choice with a specific function name. You can also force the model to generate a user-facing message by setting tool_choice: "none". Note that the default behavior (tool_choice: "auto") is for the model to decide on its own whether to call a function and if so which function to call. + +Example with one function called in parallel + +```python +python +import openai +import json + +# Example dummy function hard coded to return the same weather +# In production, this could be your backend API or an external API +def get_current_weather(location, unit="fahrenheit"): + """Get the current weather in a given location""" + if "tokyo" in location.lower(): + return json.dumps({"location": location, "temperature": "10", "unit": "celsius"}) + elif "san francisco" in location.lower(): + return json.dumps({"location": location, "temperature": "72", "unit": "fahrenheit"}) + else: + return json.dumps({"location": location, "temperature": "22", "unit": "celsius"}) + +def run_conversation(): + # Step 1: send the conversation and available functions to the model + messages = [{"role": "user", "content": "What's the weather like in San Francisco, Tokyo, and Paris?"}] + tools = [ + { + "type": "function", + "function": { + "name": "get_current_weather", + "description": "Get the current weather in a given location", + "parameters": { + "type": "object", + "properties": { + "location": { + "type": "string", + "description": "The city and state, e.g. 
San Francisco, CA", + }, + "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]}, + }, + "required": ["location"], + }, + }, + } + ] + response = openai.chat.completions.create( + model="gpt-3.5-turbo-1106", + messages=messages, + tools=tools, + tool_choice="auto", # auto is default, but we'll be explicit + ) + response_message = response.choices[0].message + tool_calls = response_message.tool_calls + # Step 2: check if the model wanted to call a function + if tool_calls: + # Step 3: call the function + # Note: the JSON response may not always be valid; be sure to handle errors + available_functions = { + "get_current_weather": get_current_weather, + } # only one function in this example, but you can have multiple + messages.append(response_message) # extend conversation with assistant's reply + # Step 4: send the info for each function call and function response to the model + for tool_call in tool_calls: + function_name = tool_call.function.name + function_to_call = available_functions[function_name] + function_args = json.loads(tool_call.function.arguments) + function_response = function_to_call( + location=function_args.get("location"), + unit=function_args.get("unit"), + ) + messages.append( + { + "tool_call_id": tool_call.id, + "role": "tool", + "name": function_name, + "content": function_response, + } + ) # extend conversation with function response + second_response = openai.chat.completions.create( + model="gpt-3.5-turbo-1106", + messages=messages, + ) # get a new response from the model where it can see the function response + return second_response +print(run_conversation()) +``` +You can find more examples of function calling in the OpenAI cookbook: +Function calling +Learn from more examples demonstrating function calling +Tokens +Under the hood, functions are injected into the system message in a syntax the model has been trained on. This means functions count against the model's context limit and are billed as input tokens. If running into context limits, we suggest limiting the number of functions or the length of documentation you provide for function parameters. + +It is also possible to use fine-tuning to reduce the number of tokens used if you have many functions defined. + + + + + + + + + +##Text generation models +New capabilities launched at DevDay + +Text generation models are now capable of JSON mode and Reproducible outputs. We also launched the Assistants API to enable you to build agent-like experiences on top of our text-generation models. +OpenAI's text generation models (often called generative pre-trained transformers or large language models) have been trained to understand natural language, code, and images. The models provide text outputs in response to their inputs. The inputs to these models are also referred to as "prompts". Designing a prompt is essentially how you “program” a large language model model, usually by providing instructions or some examples of how to successfully complete a task. + +Using OpenAI's text generation models, you can build applications to: + +Draft documents +Write computer code +Answer questions about a knowledge base +Analyze texts +Give software a natural language interface +Tutor in a range of subjects +Translate languages +Simulate characters for games +With the release of gpt-4-vision-preview, you can now build systems that also process and understand images. + +Explore GPT-4 with image inputs +Check out the vision guide for more detail. 
+To use one of these models via the OpenAI API, you’ll send a request containing the inputs and your API key, and receive a response containing the model’s output. Our latest models, gpt-4 and gpt-3.5-turbo, are accessed through the chat completions API endpoint. + +MODEL FAMILIES API ENDPOINT +Newer models (2023–) gpt-4, gpt-3.5-turbo https://api.openai.com/v1/chat/completions +Updated base models (2023) babbage-002, davinci-002 https://api.openai.com/v1/completions +Legacy models (2020–2022) text-davinci-003, text-davinci-002, davinci, curie, babbage, ada https://api.openai.com/v1/completions +You can experiment with various models in the chat playground. If you’re not sure which model to use, then use gpt-3.5-turbo or gpt-4. + +Chat Completions API +Chat models take a list of messages as input and return a model-generated message as output. Although the chat format is designed to make multi-turn conversations easy, it’s just as useful for single-turn tasks without any conversation. + +An example Chat Completions API call looks like the following: + +```python +from openai import OpenAI +client = OpenAI() + +response = client.chat.completions.create( + model="gpt-3.5-turbo", + messages=[ + {"role": "system", "content": "You are a helpful assistant."}, + {"role": "user", "content": "Who won the world series in 2020?"}, + {"role": "assistant", "content": "The Los Angeles Dodgers won the World Series in 2020."}, + {"role": "user", "content": "Where was it played?"} + ] +) +``` +To learn more, you can view the full API reference documentation for the Chat API. + +The main input is the messages parameter. Messages must be an array of message objects, where each object has a role (either "system", "user", or "assistant") and content. Conversations can be as short as one message or many back and forth turns. + +Typically, a conversation is formatted with a system message first, followed by alternating user and assistant messages. + +The system message helps set the behavior of the assistant. For example, you can modify the personality of the assistant or provide specific instructions about how it should behave throughout the conversation. However note that the system message is optional and the model’s behavior without a system message is likely to be similar to using a generic message such as "You are a helpful assistant." + +The user messages provide requests or comments for the assistant to respond to. Assistant messages store previous assistant responses, but can also be written by you to give examples of desired behavior. + +Including conversation history is important when user instructions refer to prior messages. In the example above, the user’s final question of "Where was it played?" only makes sense in the context of the prior messages about the World Series of 2020. Because the models have no memory of past requests, all relevant information must be supplied as part of the conversation history in each request. If a conversation cannot fit within the model’s token limit, it will need to be shortened in some way. + +To mimic the effect seen in ChatGPT where the text is returned iteratively, set the stream parameter to true. 
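+As a minimal sketch, streaming with the Python SDK looks like the following; each chunk carries a small delta of the reply, which is printed as it arrives:
+
+```python
+from openai import OpenAI
+
+client = OpenAI()
+
+# Stream tokens as they are generated instead of waiting for the full reply.
+stream = client.chat.completions.create(
+    model="gpt-3.5-turbo",
+    messages=[{"role": "user", "content": "Write a one-line haiku about the sea."}],
+    stream=True,
+)
+
+for chunk in stream:
+    delta = chunk.choices[0].delta.content
+    if delta is not None:
+        print(delta, end="", flush=True)
+print()
+```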
+Chat Completions response format +An example Chat Completions API response looks as follows: + +```json +{ + "choices": [ + { + "finish_reason": "stop", + "index": 0, + "message": { + "content": "The 2020 World Series was played in Texas at Globe Life Field in Arlington.", + "role": "assistant" + } + } + ], + "created": 1677664795, + "id": "chatcmpl-7QyqpwdfhqwajicIEznoc6Q47XAyW", + "model": "gpt-3.5-turbo-0613", + "object": "chat.completion", + "usage": { + "completion_tokens": 17, + "prompt_tokens": 57, + "total_tokens": 74 + } +} +``` +The assistant’s reply can be extracted with: + +```python +response['choices'][0]['message']['content'] +``` + +Every response will include a finish_reason. The possible values for finish_reason are: + +stop: API returned complete message, or a message terminated by one of the stop sequences provided via the stop parameter +length: Incomplete model output due to max_tokens parameter or token limit +function_call: The model decided to call a function +content_filter: Omitted content due to a flag from our content filters +null: API response still in progress or incomplete +Depending on input parameters, the model response may include different information. + +JSON mode New +A common way to use Chat Completions is to instruct the model to always return JSON in some format that makes sense for your use case, by providing a system message. This works well, but occasionally the models may generate output that does not parse to valid JSON. + +To prevent these errors and improve model performance, when calling gpt-4-1106-preview or gpt-3.5-turbo-1106, you can set response_format to { type: "json_object" } to enable JSON mode. When JSON mode is enabled, the model is constrained to only generate strings that parse into valid JSON. + +Important notes: + +When using JSON mode, always instruct the model to produce JSON via some message in the conversation, for example via your system message. If you don't include an explicit instruction to generate JSON, the model may generate an unending stream of whitespace and the request may run continually until it reaches the token limit. To help ensure you don't forget, the API will throw an error if the string "JSON" does not appear somewhere in the context. +The JSON in the message the model returns may be partial (i.e. cut off) if finish_reason is length, which indicates the generation exceeded max_tokens or the conversation exceeded the token limit. To guard against this, check finish_reason before parsing the response. +JSON mode will not guarantee the output matches any specific schema, only that it is valid and parses without errors. +Note that JSON mode is always enabled when the model is generating arguments as part of function calling. + +Reproducible outputs Beta +Chat Completions are non-deterministic by default (which means model outputs may differ from request to request). That being said, we offer some control towards deterministic outputs by giving you access to the seed parameter and the system_fingerprint response field. + +To receive (mostly) deterministic outputs across API calls, you can: + +Set the seed parameter to any integer of your choice and use the same value across requests you'd like deterministic outputs for. +Ensure all other parameters (like prompt or temperature) are the exact same across requests. +Sometimes, determinism may be impacted due to necessary changes OpenAI makes to model configurations on our end. To help you keep track of these changes, we expose the system_fingerprint field. 
If this value is different, you may see different outputs due to changes we've made on our systems. + +Deterministic outputs +Explore the new seed parameter in the OpenAI cookbook +Managing tokens +Language models read and write text in chunks called tokens. In English, a token can be as short as one character or as long as one word (e.g., a or apple), and in some languages tokens can be even shorter than one character or even longer than one word. + +For example, the string "ChatGPT is great!" is encoded into six tokens: ["Chat", "G", "PT", " is", " great", "!"]. + +The total number of tokens in an API call affects: + +How much your API call costs, as you pay per token +How long your API call takes, as writing more tokens takes more time +Whether your API call works at all, as total tokens must be below the model’s maximum limit (4097 tokens for gpt-3.5-turbo) +Both input and output tokens count toward these quantities. For example, if your API call used 10 tokens in the message input and you received 20 tokens in the message output, you would be billed for 30 tokens. Note however that for some models the price per token is different for tokens in the input vs. the output (see the pricing page for more information). + +To see how many tokens are used by an API call, check the usage field in the API response (e.g., response['usage']['total_tokens']). + +Chat models like gpt-3.5-turbo and gpt-4 use tokens in the same way as the models available in the completions API, but because of their message-based formatting, it's more difficult to count how many tokens will be used by a conversation. + +DEEP DIVE +Counting tokens for chat API calls +To see how many tokens are in a text string without making an API call, use OpenAI’s tiktoken Python library. Example code can be found in the OpenAI Cookbook’s guide on how to count tokens with tiktoken. + +Each message passed to the API consumes the number of tokens in the content, role, and other fields, plus a few extra for behind-the-scenes formatting. This may change slightly in the future. + +If a conversation has too many tokens to fit within a model’s maximum limit (e.g., more than 4097 tokens for gpt-3.5-turbo), you will have to truncate, omit, or otherwise shrink your text until it fits. Beware that if a message is removed from the messages input, the model will lose all knowledge of it. + +Note that very long conversations are more likely to receive incomplete replies. For example, a gpt-3.5-turbo conversation that is 4090 tokens long will have its reply cut off after just 6 tokens. + +Parameter details +Frequency and presence penalties + +The frequency and presence penalties found in the Chat Completions API and Legacy Completions API can be used to reduce the likelihood of sampling repetitive sequences of tokens. They work by directly modifying the logits (un-normalized log-probabilities) with an additive contribution. + +``` +mu[j] -> mu[j] - c[j] * alpha_frequency - float(c[j] > 0) * alpha_presence +``` + +Where: + +mu[j] is the logits of the j-th token +c[j] is how often that token was sampled prior to the current position +float(c[j] > 0) is 1 if c[j] > 0 and 0 otherwise +alpha_frequency is the frequency penalty coefficient +alpha_presence is the presence penalty coefficient +As we can see, the presence penalty is a one-off additive contribution that applies to all tokens that have been sampled at least once and the frequency penalty is a contribution that is proportional to how often a particular token has already been sampled. 
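+As an illustration only, the adjustment above can be written out in a few lines of Python; this is a conceptual sketch of the formula applied to a toy logits dictionary, not something you run against the API:
+
+```python
+from collections import Counter
+
+def apply_penalties(logits, sampled_tokens, alpha_frequency=0.5, alpha_presence=0.5):
+    """Sketch of mu[j] -> mu[j] - c[j]*alpha_frequency - (c[j] > 0)*alpha_presence."""
+    counts = Counter(sampled_tokens)  # c[j]: how often token j has been sampled so far
+    adjusted = dict(logits)
+    for token, count in counts.items():
+        if token in adjusted:
+            adjusted[token] -= count * alpha_frequency + alpha_presence
+    return adjusted
+
+# "the" was sampled twice, so it is penalised more than tokens not yet sampled.
+print(apply_penalties({"the": 1.2, "cat": 0.7}, ["the", "the"]))
+```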
+ +Reasonable values for the penalty coefficients are around 0.1 to 1 if the aim is to just reduce repetitive samples somewhat. If the aim is to strongly suppress repetition, then one can increase the coefficients up to 2, but this can noticeably degrade the quality of samples. Negative values can be used to increase the likelihood of repetition. + +Completions API Legacy +The completions API endpoint received its final update in July 2023 and has a different interface than the new chat completions endpoint. Instead of the input being a list of messages, the input is a freeform text string called a prompt. + +An example API call looks as follows: + +```python +from openai import OpenAI +client = OpenAI() + +response = client.completions.create( + model="gpt-3.5-turbo-instruct", + prompt="Write a tagline for an ice cream shop." +) +``` +See the full API reference documentation to learn more. + +Token log probabilities +The completions API can provide a limited number of log probabilities associated with the most likely tokens for each output token. This feature is controlled by using the logprobs field. This can be useful in some cases to assess the confidence of the model in its output. + +Inserting text +The completions endpoint also supports inserting text by providing a suffix in addition to the standard prompt which is treated as a prefix. This need naturally arises when writing long-form text, transitioning between paragraphs, following an outline, or guiding the model towards an ending. This also works on code, and can be used to insert in the middle of a function or file. + +DEEP DIVE +Inserting text +Completions response format +An example completions API response looks as follows: + +```json +{ + "choices": [ + { + "finish_reason": "length", + "index": 0, + "logprobs": null, + "text": "\n\n\"Let Your Sweet Tooth Run Wild at Our Creamy Ice Cream Shack" + } + ], + "created": 1683130927, + "id": "cmpl-7C9Wxi9Du4j1lQjdjhxBlO22M61LD", + "model": "gpt-3.5-turbo-instruct", + "object": "text_completion", + "usage": { + "completion_tokens": 16, + "prompt_tokens": 10, + "total_tokens": 26 + } +} +``` +In Python, the output can be extracted with response['choices'][0]['text']. + +The response format is similar to the response format of the Chat Completions API but also includes the optional field logprobs. + +Chat Completions vs. Completions +The Chat Completions format can be made similar to the completions format by constructing a request using a single user message. For example, one can translate from English to French with the following completions prompt: + +``` +Translate the following English text to French: "{text}" +``` + +And an equivalent chat prompt would be: + +``` +[{"role": "user", "content": 'Translate the following English text to French: "{text}"'}] +``` + +Likewise, the completions API can be used to simulate a chat between a user and an assistant by formatting the input accordingly. + +The difference between these APIs is the underlying models that are available in each. The chat completions API is the interface to our most capable model (gpt-4), and our most cost effective model (gpt-3.5-turbo). + +Which model should I use? +We generally recommend that you use either gpt-4 or gpt-3.5-turbo. Which of these you should use depends on the complexity of the tasks you are using the models for. gpt-4 generally performs better on a wide range of evaluations. In particular, gpt-4 is more capable at carefully following complex instructions. 
By contrast gpt-3.5-turbo is more likely to follow just one part of a complex multi-part instruction. gpt-4 is less likely than gpt-3.5-turbo to make up information, a behavior known as "hallucination". gpt-4 also has a larger context window with a maximum size of 8,192 tokens compared to 4,096 tokens for gpt-3.5-turbo. However, gpt-3.5-turbo returns outputs with lower latency and costs much less per token.
+
+We recommend experimenting in the playground to investigate which models provide the best price-performance trade-off for your usage. A common design pattern is to use several distinct query types, each dispatched to the model best suited to handle it.
+
+Prompt engineering
+An awareness of the best practices for working with OpenAI models can make a significant difference in application performance. The failure modes that each model exhibits, and the ways of working around or correcting those failure modes, are not always intuitive. There is an entire field related to working with language models which has come to be known as "prompt engineering", but as the field has progressed its scope has outgrown merely engineering the prompt into engineering systems that use model queries as components. To learn more, read our guide on prompt engineering, which covers methods to improve model reasoning, reduce the likelihood of model hallucinations, and more. You can also find many useful resources, including code samples, in the OpenAI Cookbook.
+
+FAQ
+How should I set the temperature parameter?
+Lower values for temperature result in more consistent outputs, while higher values generate more diverse and creative results. Select a temperature value based on the desired trade-off between coherence and creativity for your specific application.
+
+Is fine-tuning available for the latest models?
+Yes, for some. Currently, you can only fine-tune gpt-3.5-turbo and our updated base models (babbage-002 and davinci-002). See the fine-tuning guide for more details on how to use fine-tuned models.
+
+Do you store the data that is passed into the API?
+As of March 1st, 2023, we retain your API data for 30 days but no longer use your data sent via the API to improve our models. Learn more in our data usage policy. Some endpoints offer zero retention.
+
+How can I make my application safer?
+If you want to add a moderation layer to the outputs of the Chat API, you can follow our moderation guide to prevent content that violates OpenAI’s usage policies from being shown.
+
+Should I use ChatGPT or the API?
+ChatGPT offers a chat interface to the models in the OpenAI API and a range of built-in features such as integrated browsing, code execution, plugins, and more. By contrast, using OpenAI’s API provides more flexibility.
+
+
+## Assistants API Beta
+The Assistants API allows you to build AI assistants within your own applications. An Assistant has instructions and can leverage models, tools, and knowledge to respond to user queries. The Assistants API currently supports three types of tools: Code Interpreter, Retrieval, and Function calling. In the future, we plan to release more OpenAI-built tools, and allow you to provide your own tools on our platform.
+
+You can explore the capabilities of the Assistants API using the Assistants playground or by building a step-by-step integration outlined in this guide. At a high level, a typical integration of the Assistants API has the following flow:
+
+Create an Assistant in the API by defining its custom instructions and picking a model. If helpful, enable tools like Code Interpreter, Retrieval, and Function calling.
+Create a Thread when a user starts a conversation.
+Add Messages to the Thread as the user asks questions.
+Run the Assistant on the Thread to trigger responses. This automatically calls the relevant tools.
+The Assistants API is in beta and we are actively working on adding more functionality. Share your feedback in our Developer Forum!
+This starter guide walks through the key steps to create and run an Assistant that uses Code Interpreter.
+
+Step 1: Create an Assistant
+An Assistant represents an entity that can be configured to respond to users’ Messages using several parameters like:
+
+Instructions: how the Assistant and model should behave or respond
+Model: you can specify any GPT-3.5 or GPT-4 models, including fine-tuned models. The Retrieval tool requires the gpt-3.5-turbo-1106 and gpt-4-1106-preview models.
+Tools: the API supports Code Interpreter and Retrieval, which are built and hosted by OpenAI.
+Functions: the API allows you to define custom function signatures, with similar behavior as our function calling feature.
+In this example, we're creating an Assistant that is a personal math tutor, with the Code Interpreter tool enabled:
+
+Calls to the Assistants API require that you pass a beta HTTP header. This is handled automatically if you’re using OpenAI’s official Python and Node.js SDKs.
+OpenAI-Beta: assistants=v1
+
+
+```python
+assistant = client.beta.assistants.create(
+    name="Math Tutor",
+    instructions="You are a personal math tutor. Write and run code to answer math questions.",
+    tools=[{"type": "code_interpreter"}],
+    model="gpt-4-1106-preview"
+)
+```
+Step 2: Create a Thread
+A Thread represents a conversation. We recommend creating one Thread per user as soon as the user initiates the conversation. Pass any user-specific context and files in this thread by creating Messages.
+
+```python
+thread = client.beta.threads.create()
+```
+Threads don’t have a size limit. You can pass as many Messages as you want to a Thread. The API will ensure that requests to the model fit within the maximum context window, using relevant optimization techniques such as truncation.
+
+Step 3: Add a Message to a Thread
+A Message contains the user's text, and optionally, any files that the user uploads. Image files aren't supported today, but we plan to add support for them in the coming months.
+
+```python
+message = client.beta.threads.messages.create(
+    thread_id=thread.id,
+    role="user",
+    content="I need to solve the equation `3x + 11 = 14`. Can you help me?"
+)
+```
+Now if you list the Messages in the Thread, you will see that this message has been added to the thread on creation:
+
+```json
+{
+  "object": "list",
+  "data": [
+    {
+      "created_at": 1696995451,
+      "id": "msg_4rb1Skx3XgQZEe4PHVRFQhr0",
+      "object": "thread.message",
+      "thread_id": "thread_34p0sfdas0823smfv",
+      "role": "user",
+      "content": [{
+        "type": "text",
+        "text": {
+          "value": "I need to solve the equation `3x + 11 = 14`. Can you help me?",
+          "annotations": []
+        }
+      }],
+      ...
+```
+Step 4: Run the Assistant
+For the Assistant to respond to the user message, you need to create a Run. This makes the Assistant read the Thread and decide whether to call tools or simply use the model to best answer the user query. As the run progresses, the assistant appends Messages to the thread with the role="assistant".
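+
+In its simplest form, creating a Run only needs the Thread and Assistant from the previous steps (a minimal sketch; the same call appears again in the Runs and Run Steps section below):
+
+```python
+# Create a Run so the Assistant processes the Thread created above
+run = client.beta.threads.runs.create(
+    thread_id=thread.id,
+    assistant_id=assistant.id
+)
+```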
+ +You can optionally pass additional instructions to the Assistant while creating the Run: + + + +```python +run = client.beta.threads.runs.create( + thread_id=thread.id, + assistant_id=assistant.id, + instructions="Please address the user as Jane Doe. The user has a premium account." +) +``` +Step 5: Display the Assistant's Response +This creates a Run in a queued status. You can periodically retrieve the Run to check on its status to see if it has moved to completed. + + +```python +run = client.beta.threads.runs.retrieve( + thread_id=thread.id, + run_id=run.id +) +``` +Once the Run completes, you can retrieve the Messages added by the Assistant to the Thread. + + +```python +messages = client.beta.threads.messages.list( + thread_id=thread.id +) +``` +And finally, display them to the user! During this Run, the Assistant added two new Messages to the Thread. + +ROLE CONTENT +user I need to solve the equation 3x + 11 = 14. Can you help me? +assistant Certainly, Jane Doe. To solve the equation (3x + 11 = 14) for (x), you'll want to isolate (x) on one side of the equation. Here's how you can do that: +Subtract 11 from both sides of the equation to get (3x = 3). +Then, divide both sides by 3 to solve for (x). +Let me calculate the value of (x) for you. +assistant The solution to the equation (3x + 11 = 14) is (x = 1). +You can also retrieve the Run Steps of this Run if you'd like to explore or display the inner workings of the Assistant and its tools. + + +## How Assistants work Beta +The Assistants API is designed to help developers build powerful AI assistants capable of performing a variety of tasks. + +The Assistants API is in beta and we are actively working on adding more functionality. Share your feedback in our Developer Forum! +Assistants can call OpenAI’s models with specific instructions to tune their personality and capabilities. +Assistants can access multiple tools in parallel. These can be both OpenAI-hosted tools — like Code interpreter and Knowledge retrieval — or tools you build / host (via Function calling). +Assistants can access persistent Threads. Threads simplify AI application development by storing message history and truncating it when the conversation gets too long for the model’s context length. You create a Thread once, and simply append Messages to it as your users reply. +Assistants can access Files in several formats — either as part of their creation or as part of Threads between Assistants and users. When using tools, Assistants can also create files (e.g., images, spreadsheets, etc) and cite files they reference in the Messages they create. +Objects +Assistants object architecture diagram + +OBJECT WHAT IT REPRESENTS +Assistant Purpose-built AI that uses OpenAI’s models and calls tools +Thread A conversation session between an Assistant and a user. Threads store Messages and automatically handle truncation to fit content into a model’s context. +Message A message created by an Assistant or a user. Messages can include text, images, and other files. Messages stored as a list on the Thread. +Run An invocation of an Assistant on a Thread. The Assistant uses it’s configuration and the Thread’s Messages to perform tasks by calling models and tools. As part of a Run, the Assistant appends Messages to the Thread. +Run Step A detailed list of steps the Assistant took as part of a Run. An Assistant can call tools or create Messages during it’s run. Examining Run Steps allows you to introspect how the Assistant is getting to it’s final results. 
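+
+To make the relationships between these objects concrete, here is a rough end-to-end sketch assembled from the calls shown in this guide; the polling loop and one-second interval are illustrative assumptions (streaming support is planned but not yet available):
+
+```python
+import time
+
+# Assumes `client`, `assistant`, and `thread` were created as in the earlier steps.
+run = client.beta.threads.runs.create(thread_id=thread.id, assistant_id=assistant.id)
+
+# Poll until the Run leaves the queued / in_progress states
+# (it may then be completed, requires_action, failed, etc.).
+while run.status in ("queued", "in_progress"):
+    time.sleep(1)  # illustrative polling interval
+    run = client.beta.threads.runs.retrieve(thread_id=thread.id, run_id=run.id)
+
+# Messages the Assistant appended to the Thread during the Run
+messages = client.beta.threads.messages.list(thread_id=thread.id)
+
+# Run Steps show how the Assistant arrived at its result (tool calls, message creation)
+run_steps = client.beta.threads.runs.steps.list(thread_id=thread.id, run_id=run.id)
+```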
+Creating Assistants +We recommend using OpenAI’s latest models with the Assistants API for best results and maximum compatibility with tools. +To get started, creating an Assistant only requires specifying the model to use. But you can further customize the behavior of the Assistant: + +Use the instructions parameter to guide the personality of the Assistant and define it’s goals. Instructions are similar to system messages in the Chat Completions API. +Use the tools parameter to give the Assistant access to up to 128 tools. You can give it access to OpenAI-hosted tools like code_interpreter and retrieval, or call a third-party tools via a function calling. +Use the file_ids parameter to give the tools like code_interpreter and retrieval access to files. Files are uploaded using the File upload endpoint and must have the purpose set to assistants to be used with this API. +For example, to create an Assistant that can create data visualization based on a .csv file, first upload a file. + +```python +file = client.files.create( + file=open("speech.py", "rb"), + purpose='assistants' +) +``` +And then create the Assistant with the uploaded file. + +```python +assistant = client.beta.assistants.create( + name="Data visualizer", + description="You are great at creating beautiful data visualizations. You analyze data present in .csv files, understand trends, and come up with data visualizations relevant to those trends. You also share a brief text summary of the trends observed.", + model="gpt-4-1106-preview", + tools=[{"type": "code_interpreter"}], + file_ids=[file.id] +) +``` +You can attach a maximum of 20 files per Assistant, and they can be at most 512 MB each. In addition, the size of all the files uploaded by your organization should not exceed 100GB. You can request an increase in this storage limit using our help center. + +You can also use the AssistantFile object to create, delete, or view associations between Assistant and File objects. Note that deleting an AssistantFile doesn’t delete the original File object, it simply deletes the association between that File and the Assistant. To delete a File, use the File delete endpoint instead. + +Managing Threads and Messages +Threads and Messages represent a conversation session between an Assistant and a user. There is no limit to the number of Messages you can store in a Thread. Once the size of the Messages exceeds the context window of the model, the Thread smartly truncates them to fit. You can create a Thread with an initial list of Messages like this: + + +```python +thread = client.beta.threads.create( + messages=[ + { + "role": "user", + "content": "Create 3 data visualizations based on the trends in this file.", + "file_ids": [file.id] + } + ] +) +``` +Messages can contain text, images, or files. At the moment, user-created Messages cannot contain image files but we plan to add support for this in the future. + +Message annotations + +Messages created by Assistants may contain annotations within the content array of the object. Annotations provide information around how you should annotate the text in the Message. + +There are two types of Annotations: + +file_citation: File citations are created by the retrieval tool and define references to a specific quote in a specific file that was uploaded and used by the Assistant to generate the response. +file_path: File path annotations are created by the code_interpreter tool and contain references to the files generated by the tool. 
+When annotations are present in the Message object, you'll see illegible model-generated substrings in the text that you should replace with the annotations. These strings may look something like 【13†source】 or sandbox:/mnt/data/file.csv. Here’s an example python code snippet that replaces these strings with information present in the annotations. + +```python +# Retrieve the message object +message = client.beta.threads.messages.retrieve( + thread_id="...", + message_id="..." +) + +# Extract the message content +message_content = message.content[0].text +annotations = message_content.annotations +citations = [] + +# Iterate over the annotations and add footnotes +for index, annotation in enumerate(annotations): + # Replace the text with a footnote + message_content.value = message_content.value.replace(annotation.text, f' [{index}]') + + # Gather citations based on annotation attributes + if (file_citation := getattr(annotation, 'file_citation', None)): + cited_file = client.files.retrieve(file_citation.file_id) + citations.append(f'[{index}] {file_citation.quote} from {cited_file.filename}') + elif (file_path := getattr(annotation, 'file_path', None)): + cited_file = client.files.retrieve(file_path.file_id) + citations.append(f'[{index}] Click to download {cited_file.filename}') + # Note: File download functionality not implemented above for brevity + +# Add footnotes to the end of the message before displaying to user +message_content.value += '\n' + '\n'.join(citations) +``` +Runs and Run Steps +When you have all the context you need from your user in the Thread, you can run the Thread with an Assistant of your choice. + + +```python +run = client.beta.threads.runs.create( + thread_id=thread.id, + assistant_id=assistant.id +) +``` +By default, a Run will use the model and tools configuration specified in Assistant object, but you can override most of these when creating the Run for added flexibility: + + +```python +run = client.beta.threads.runs.create( + thread_id=thread.id, + assistant_id=assistant.id, + model="gpt-4-1106-preview", + instructions="additional instructions", + tools=[{"type": "code_interpreter"}, {"type": "retrieval"}] +) +``` +Note: file_ids associated with the Assistant cannot be overridden during Run creation. You must use the modify Assistant endpoint to do this. + +Run lifecycle + +Run objects can have multiple statuses. + +Run lifecycle - diagram showing possible status transitions + +STATUS DEFINITION +queued When Runs are first created or when you complete the required_action, they are moved to a queued status. They should almost immediately move to in_progress. +in_progress While in_progress, the Assistant uses the model and tools to perform steps. You can view progress being made by the Run by examining the Run Steps. +completed The Run successfully completed! You can now view all Messages the Assistant added to the Thread, and all the steps the Run took. You can also continue the conversation by adding more user Messages to the Thread and creating another Run. +requires_action When using the Function calling tool, the Run will move to a required_action state once the model determines the names and arguments of the functions to be called. You must then run those functions and submit the outputs before the run proceeds. If the outputs are not provided before the expires_at timestamp passes (roughly 10 mins past creation), the run will move to an expired status. 
+expired This happens when the function calling outputs were not submitted before expires_at and the run expires. Additionally, if the runs take too long to execute and go beyond the time stated in expires_at, our systems will expire the run. +cancelling You can attempt to cancel an in_progress run using the Cancel Run endpoint. Once the attempt to cancel succeeds, status of the Run moves to cancelled. Cancellation is attempted but not guaranteed. +cancelled Run was successfully cancelled. +failed You can view the reason for the failure by looking at the last_error object in the Run. The timestamp for the failure will be recorded under failed_at. +Polling for updates + +In order to keep the status of your run up to date, you will have to periodically retrieve the Run object. You can check the status of the run each time you retrieve the object to determine what your application should do next. We plan to add support for streaming to make this simpler in the near future. + +Thread locks + +When a Run is in_progress and not in a terminal state, the Thread is locked. This means that: + +New Messages cannot be added to the Thread. +New Runs cannot be created on the Thread. +Run steps + +Run steps lifecycle - diagram showing possible status transitions + +Run step statuses have the same meaning as Run statuses. + +Most of the interesting detail in the Run Step object lives in the step_details field. There can be two types of step details: + +message_creation: This Run Step is created when the Assistant creates a Message on the Thread. +tool_calls: This Run Step is created when the Assistant calls a tool. Details around this are covered in the relevant sections of the Tools guide. +Data access guidance +Currently, assistants, threads, messages, and files created via the API are scoped to the entire organization. As such, any person with API key access to the organization is able to read or write assistants, threads, messages, and files in the organization. + +We strongly recommend the following data access controls: + +Implement authorization. Before performing reads or writes on assistants, threads, messages, and files, ensure that the end-user is authorized to do so. For example, store in your database the object IDs that the end-user has access to, and check it before fetching the object ID with the API. +Restrict API key access. Carefully consider who in your organization should have API keys and periodically audit this list. API keys enable a wide range of operations including reading and modifying sensitive information, such as messages and files. +Create separate accounts. Consider creating separate accounts / organizations for different applications in order to isolate data across multiple applications. +Limitations +During this beta, there are several known limitations we are looking to address in the coming weeks and months. We will publish a changelog on this page when we add support for additional functionality. + + + + + + + + + +## Tools Beta +Give Assistants access to OpenAI-hosted tools like Code Interpreter and Knowledge Retrieval, or build your own tools using Function calling. + +The Assistants API is in beta and we are actively working on adding more functionality. Share your feedback in our Developer Forum! +Code Interpreter +Code Interpreter allows the Assistants API to write and run Python code in a sandboxed execution environment. This tool can process files with diverse data and formatting, and generate files with data and images of graphs. 
Code Interpreter allows your Assistant to run code iteratively to solve challenging code and math problems. When your Assistant writes code that fails to run, it can iterate on this code by attempting to run different code until the code execution succeeds. + +Enabling Code Interpreter +Pass the code_interpreterin the tools parameter of the Assistant object to enable Code Interpreter: + +```python +assistant = client.beta.assistants.create( + instructions="You are a personal math tutor. When asked a math question, write and run code to answer the question.", + model="gpt-4-1106-preview", + tools=[{"type": "code_interpreter"}] +) +``` +The model then decides when to invoke Code Interpreter in a Run based on the nature of the user request. This behavior can be promoted by prompting in the Assistant's instructions (e.g., “write code to solve this problem”). + +Passing files to Code Interpreter +Code Interpreter can parse data from files. This is useful when you want to provide a large volume of data to the Assistant or allow your users to upload their own files for analysis. + +Files that are passed at the Assistant level are accessible by all Runs with this Assistant: + +```python +# Upload a file with an "assistants" purpose +file = client.files.create( + file=open("speech.py", "rb"), + purpose='assistants' +) + +# Create an assistant using the file ID +assistant = client.beta.assistants.create( + instructions="You are a personal math tutor. When asked a math question, write and run code to answer the question.", + model="gpt-4-1106-preview", + tools=[{"type": "code_interpreter"}], + file_ids=[file.id] +) +``` +Files can also be passed at the Thread level. These files are only accessible in the specific Thread. Upload the File using the File upload endpoint and then pass the File ID as part of the Message creation request: + +```python +thread = client.beta.threads.create( + messages=[ + { + "role": "user", + "content": "I need to solve the equation `3x + 11 = 14`. Can you help me?", + "file_ids": [file.id] + } + ] +) +``` +Files have a maximum size of 512 MB. Code Interpreter supports a variety of file formats including .csv, .pdf, .json and many more. More details on the file extensions (and their corresponding MIME-types) supported can be found in the Supported files section below. + +Reading images and files generated by Code Interpreter +Code Interpreter in the API also outputs files, such as generating image diagrams, CSVs, and PDFs. There are two types of files that are generated: + +Images +Data files (e.g. a csv file with data generated by the Assistant) +When Code Interpreter generates an image, you can look up and download this file in the file_id field of the Assistant Message response: + +```json +{ + "id": "msg_OHGpsFRGFYmz69MM1u8KYCwf", + "object": "thread.message", + "created_at": 1698964262, + "thread_id": "thread_uqorHcTs46BZhYMyPn6Mg5gW", + "role": "assistant", + "content": [ + { + "type": "image_file", + "image_file": { + "file_id": "file-WsgZPYWAauPuW4uvcgNUGcb" + } + } + ] + # ... +} +``` +The file content can then be downloaded by passing the file ID to the Files API: + +```python +content = client.files.retrieve_content(file.id) +``` +When Code Interpreter references a file path (e.g., ”Download this csv file”), file paths are listed as annotations. 
You can convert these annotations into links to download the file: + +```json +{ + "id": "msg_3jyIh3DgunZSNMCOORflDyih", + "object": "thread.message", + "created_at": 1699073585, + "thread_id": "thread_ZRvNTPOoYVGssUZr3G8cRRzE", + "role": "assistant", + "content": [ + { + "type": "text", + "text": { + "value": "The rows of the CSV file have been shuffled and saved to a new CSV file. You can download the shuffled CSV file from the following link:\n\n[Download Shuffled CSV File](sandbox:/mnt/data/shuffled_file.csv)", + "annotations": [ + { + "type": "file_path", + "text": "sandbox:/mnt/data/shuffled_file.csv", + "start_index": 167, + "end_index": 202, + "file_path": { + "file_id": "file-oSgJAzAnnQkVB3u7yCoE9CBe" + } + } + ... +``` +Input and output logs of Code Interpreter +By listing the steps of a Run that called Code Interpreter, you can inspect the code input and outputs logs of Code Interpreter: + +```python +run_steps = client.beta.threads.runs.steps.list( + thread_id=thread.id, + run_id=run.id +) +{ + "object": "list", + "data": [ + { + "id": "step_DQfPq3JPu8hRKW0ctAraWC9s", + "object": "assistant.run.step", + "type": "tool_calls", + "run_id": "run_kme4a442kme4a442", + "thread_id": "thread_34p0sfdas0823smfv", + "status": "completed", + "step_details": { + "type": "tool_calls", + "tool_calls": [ + { + "type": "code", + "code": { + "input": "# Calculating 2 + 2\nresult = 2 + 2\nresult", + "outputs": [ + { + "type": "logs", + "logs": "4" + } + ... + } +``` +Knowledge Retrieval +Retrieval augments the Assistant with knowledge from outside its model, such as proprietary product information or documents provided by your users. Once a file is uploaded and passed to the Assistant, OpenAI will automatically chunk your documents, index and store the embeddings, and implement vector search to retrieve relevant content to answer user queries. + +Enabling Retrieval +Pass the retrieval in the tools parameter of the Assistant to enable Retrieval: + +```python +assistant = client.beta.assistants.create( + instructions="You are a customer support chatbot. Use your knowledge base to best respond to customer queries.", + model="gpt-4-1106-preview", + tools=[{"type": "retrieval"}] +) +``` +How it works +The model then decides when to retrieve content based on the user Messages. The Assistants API automatically chooses between two retrieval techniques: + +it either passes the file content in the prompt for short documents, or +performs a vector search for longer documents +Retrieval currently optimizes for quality by adding all relevant content to the context of model calls. We plan to introduce other retrieval strategies to enable developers to choose a different tradeoff between retrieval quality and model usage cost. + +Uploading files for retrieval +Similar to Code Interpreter, files can be passed at the Assistant-level or at the Thread-level + +```python +# Upload a file with an "assistants" purpose +file = client.files.create( + file=open("knowledge.pdf", "rb"), + purpose='assistants' +) + +# Add the file to the assistant +assistant = client.beta.assistants.create( + instructions="You are a customer support chatbot. Use your knowledge base to best respond to customer queries.", + model="gpt-4-1106-preview", + tools=[{"type": "retrieval"}], + file_ids=[file.id] +) +``` +Files can also be added to a Message in a Thread. These files are only accessible within this specific thread. 
After having uploaded a file, you can pass the ID of this File when creating the Message: + +```python +message = client.beta.threads.messages.create( + thread_id=thread.id, + role="user", + content="I can't find in the PDF manual how to turn off this device.", + file_ids=[file.id] +) +``` +Maximum file size is 512MB. Retrieval supports a variety of file formats including .pdf, .md, .docx and many more. More details on the file extensions (and their corresponding MIME-types) supported can be found in the Supported files section below. + +Deleting files +To remove a file from the assistant, you can detach the file from the assistant: + +```python +file_deletion_status = client.beta.assistants.files.delete( + assistant_id=assistant.id, + file_id=file.id +) +``` +Detaching the file from the assistant removes the file from the retrieval index as well. + +File citations +When Code Interpreter outputs file paths in a Message, you can convert them to corresponding file downloads using the annotations field. See the Annotations section for an example of how to do this. + +```json +{ + "id": "msg_3jyIh3DgunZSNMCOORflDyih", + "object": "thread.message", + "created_at": 1699073585, + "thread_id": "thread_ZRvNTPOoYVGssUZr3G8cRRzE", + "role": "assistant", + "content": [ + { + "type": "text", + "text": { + "value": "The rows of the CSV file have been shuffled and saved to a new CSV file. You can download the shuffled CSV file from the following link:\n\n[Download Shuffled CSV File](sandbox:/mnt/data/shuffled_file.csv)", + "annotations": [ + { + "type": "file_path", + "text": "sandbox:/mnt/data/shuffled_file.csv", + "start_index": 167, + "end_index": 202, + "file_path": { + "file_id": "file-oSgJAzAnnQkVB3u7yCoE9CBe" + } + } + ] + } + } + ], + "file_ids": [ + "file-oSgJAzAnnQkVB3u7yCoE9CBe" + ], + ... + }, +``` +Function calling +Similar to the Chat Completions API, the Assistants API supports function calling. Function calling allows you to describe functions to the Assistants and have it intelligently return the functions that need to be called along with their arguments. The Assistants API will pause execution during a Run when it invokes functions, and you can supply the results of the function call back to continue the Run execution. + +Defining functions +First, define your functions when creating an Assistant: + +```python +assistant = client.beta.assistants.create( + instructions="You are a weather bot. Use the provided functions to answer questions.", + model="gpt-4-1106-preview", + tools=[{ + "type": "function", + "function": { + "name": "getCurrentWeather", + "description": "Get the weather in location", + "parameters": { + "type": "object", + "properties": { + "location": {"type": "string", "description": "The city and state e.g. San Francisco, CA"}, + "unit": {"type": "string", "enum": ["c", "f"]} + }, + "required": ["location"] + } + } + }, { + "type": "function", + "function": { + "name": "getNickname", + "description": "Get the nickname of a city", + "parameters": { + "type": "object", + "properties": { + "location": {"type": "string", "description": "The city and state e.g. San Francisco, CA"}, + }, + "required": ["location"] + } + } + }] +) +``` +Reading the functions called by the Assistant +When you initiate a Run with a user Message that triggers the function, the Run will enter a requires_action status. 
The model can provide multiple functions to call at once via the parallel function calling feature: + +```json +{ + "id": "run_3HV7rrQsagiqZmYynKwEdcxS", + "object": "thread.run", + "assistant_id": "asst_rEEOF3OGMan2ChvEALwTQakP", + "thread_id": "thread_dXgWKGf8Cb7md8p0wKiMDGKc", + "status": "requires_action", + "required_action": { + "type": "submit_tool_outputs", + "submit_tool_outputs": { + "tool_calls": [ + { + "tool_call_id": "call_Vt5AqcWr8QsRTNGv4cDIpsmA", + "type": "function", + "function": { + "name": "getCurrentWeather", + "arguments": "{\"location\":\"San Francisco\"}" + } + }, + { + "tool_call_id": "call_45y0df8230430n34f8saa", + "type": "function", + "function": { + "name": "getNickname", + "arguments": "{\"location\":\"Los Angeles\"}" + } + } + ] + } + }, +... +``` +Submitting functions outputs +You can then complete the Run by submitting the output from the function(s) you call. Pass the tool_call_id referenced in the required_action object above to match output to each function call. + +```python +run = client.beta.threads.runs.submit_tool_outputs( + thread_id=thread.id, + run_id=run.id, + tool_outputs=[ + { + "tool_call_id": call_ids[0], + "output": "22C", + }, + { + "tool_call_id": call_ids[1], + "output": "LA", + }, + ] +) +``` +After submitting outputs, the run will enter the queued state before it continues it’s execution. + +Supported files +For text/ MIME types, the encoding must be one of utf-8, utf-16, or ascii. + +FILE FORMAT MIME TYPE CODE INTERPRETER RETRIEVAL +.c text/x-c +.cpp text/x-c++ +.csv application/csv +.docx application/vnd.openxmlformats-officedocument.wordprocessingml.document +.html text/html +.java text/x-java +.json application/json +.md text/markdown +.pdf application/pdf +.php text/x-php +.pptx application/vnd.openxmlformats-officedocument.presentationml.presentation +.py text/x-python +.py text/x-script.python +.rb text/x-ruby +.tex text/x-tex +.txt text/plain +.css text/css +.jpeg image/jpeg +.jpg image/jpeg +.js text/javascript +.gif image/gif +.png image/png +.tar application/x-tar +.ts application/typescript +.xlsx application/vnd.openxmlformats-officedocument.spreadsheetml.sheet +.xml application/xml or "text/xml" +.zip application/zip +Was this page useful? diff --git a/agent_builder/README.md b/agent_builder/README.md new file mode 100644 index 0000000..35dbdbd --- /dev/null +++ b/agent_builder/README.md @@ -0,0 +1,43 @@ +# Agent Builder + +This preliminary experiment has to do with getting the OpenAI Assistant endpoint to create other Assistants (agents). + +## Parameters + +1. **Instructions:** This is similar to the SYSTEM window of the ChatGPT API or the Custome Instructions in ChatGPT web interface. The latest model seems incredibly responsive to personas, archetypes, and so on. This needs to be dynamically defined at instantiation based upon `context`. + +2. **Functions:** These are the "tools" that each agent will have. The first tools are the ability to create and destroy other agents. The second tools are the abilities to make more tools, depending upon `context` and `mission`. Other functions will eventually include API calls to other systems, such as cloud platforms, robots, data access points, and so on. + +3. **File Retrieval:** This will serve as each agent's internal KB (knowledge base) including knowledge about the HAAS system, morality, code base, API documentation, and so on. Basically this is the "user manual" for the agent. 
+
+**Code Interpreter** will likely be needed by most agents, but as of right now it's not clear how it may be used.
+
+## Instructions
+
+The instructions will need to include enough `context`, such as information about the HAAS system overall, as well as a few other bits of information.
+
+1. **Archetype or Persona:** The latest models demonstrate the ability to adopt personas very well. This is further exemplified by ChatDev, where the model can adopt personas such as "marketer" or "CEO". Furthermore, we can adopt mythic archetypes capable of moral reasoning, executive judgment, ethical debate, and prosecution of mission. This is critical for the SOB (supreme oversight board) as it will be tasked with making executive decisions, judgments, debating over morality and ethics, and overall steering the rest of the ship.
+
+2. **Context:** Context needs to include enough information about what the agent is supposed to do and what it is part of.
+
+3. **Mission:** Every agent must be instantiated with an individualized mission or purpose. At the highest level, the SOB's mission is to steer the rest of the swarm. At the lowest level, an agent's purpose may be something as simple as "Fix this software bug" or "Send an email to Jeff Bezos."
+
+## Functions
+
+The functions are the tools given to the agent at instantiation. This includes data tools (internal ability to manipulate data), coding tools (ability to write and execute code), and access tools (ability to communicate with external APIs), and probably a few other categories, but these are likely the main ones.
+
+1. **Internal Data Tools:** This includes the ability to perform RAG, search for and manipulate data (such as with the BSHR loop), organize it, and so on.
+
+2. **Internal Coding Tools:** This includes the ability to write, test, execute, and read code for internal use or for saving to external repositories.
+
+3. **External API Calls:** This includes all external tools such as calls to vendor and cloud platforms, robotic endpoints, and so on.
+
+## Retrieval
+
+This includes the list of files to be included with the agent at instantiation. There are likely some standard documents that should be included with all agents, such as a baseline HAAS document that covers the basics of operation (sort of like an Employee Handbook and SOPs). Each agent should also be equipped with any unique or distinct information needed for its `mission`, such as software specifications and documentation.
+
+1. **Agent Handbook:** This is a standard document that all agents should be equipped with. This document should define HAAS at a high level, as well as SOPs (standard operating procedures) to ensure consistency across the entire swarm.
+
+2. **Software Specifications:** For agents meant to build anything software related, it should get specifications such as definition of done and other critical components so it knows what to build.
+
+3. **Relevant Documentation:** This should be API documentation, procedural documentation, and other stuff relevant to the agent's particular `mission` and `functions`.
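+
+Putting the parameters above together, here is a very rough sketch of what instantiating a single sub-agent could look like with the current Assistants API. The file name, agent name, and instructions are purely illustrative placeholders, not part of any agreed design:
+
+```python
+from openai import OpenAI
+
+client = OpenAI()
+
+# Hypothetical knowledge-base file for the new agent (e.g. the Agent Handbook)
+handbook = client.files.create(file=open("agent_handbook.md", "rb"), purpose="assistants")
+
+# Instantiate a sub-agent with an archetype/mission style instruction,
+# built-in tools, and its retrieval files.
+sub_agent = client.beta.assistants.create(
+    name="Bug Fixer",
+    instructions="You are a diligent software engineer. Mission: fix the reported software bug.",
+    model="gpt-4-1106-preview",
+    tools=[{"type": "code_interpreter"}, {"type": "retrieval"}],
+    file_ids=[handbook.id]
+)
+```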
\ No newline at end of file diff --git a/agent/definition.md b/agent_builder/agent_definition.md similarity index 100% rename from agent/definition.md rename to agent_builder/agent_definition.md diff --git a/tool_maker/README.md b/tool_maker/README.md new file mode 100644 index 0000000..1e24731 --- /dev/null +++ b/tool_maker/README.md @@ -0,0 +1,5 @@ +# Tool Maker + +This preliminary experiment has to do with getting the OpenAI Assistant endpoint to create agents that can create tools (functions). The ability to instantiate any tool from scratch will be critical to enabling a fully autonomous swarm. + +This function is as-yet undefined. \ No newline at end of file From 0e5a21dd381f12d2ed23ac5ca5be36131e75a77d Mon Sep 17 00:00:00 2001 From: David Shapiro Date: Thu, 9 Nov 2023 05:47:37 -0500 Subject: [PATCH 009/141] updating contributing and readme --- README.md | 2 +- contributing.md | 24 +++++++++++++++++++----- 2 files changed, 20 insertions(+), 6 deletions(-) diff --git a/README.md b/README.md index a8577e9..952370d 100644 --- a/README.md +++ b/README.md @@ -1,4 +1,4 @@ -# Project: Hierarchical Autonomous Agent Swarm +# Hierarchical Autonomous Agent Swarm (HAAS) ## Overview diff --git a/contributing.md b/contributing.md index ec937dc..6463265 100644 --- a/contributing.md +++ b/contributing.md @@ -1,7 +1,21 @@ -# Contributing +# Contributing to Hierarchical Autonomous Agent Swarm (HAAS) -1. Start with Discussions: Talk about what you're doing, ask questions. -2. Then move to Issues: Once an issue has been discussed, create a formal issue. -3. Finally create a PR: Once an issue has been clarified, submit a PR. +Thank you for your interest in contributing to the Hierarchical Autonomous Agent Swarm (HAAS) project. This document outlines the process for contributing in a way that is efficient and aligns with the project's goals. -Unsolicited and undiscussed PRs will likely be rejected \ No newline at end of file +## Contribution Workflow + +1. **Watch the Latest YouTube Video**: Stay updated with the project's progress and priorities by watching the latest video updates from Dave, the project owner. + +2. **Discuss on the Discussions Tab**: Engage with the community by discussing ideas, suggestions, and feedback related to the latest updates on the GitHub Discussions tab. + +3. **Create an Issue**: If you identify a bug or have a feature request, create an issue on GitHub detailing your findings or suggestions. + +4. **Submit a PR**: Once you've discussed your idea and created an issue, you can fork the repository, make your changes, and submit a pull request (PR) for review. + +## Ground Rules for Commenting and Contributing + +1. **Stay on Topic**: Discussions should be relevant to the project's current scope and topics presented in the latest YouTube update. Off-topic discussions, meta commentary, or attempts to change the project's scope will be removed. Repeated violations may lead to a ban. + +2. **Adhere to the C3P0 Policy**: We follow the Collaborative Culture Community Policy: Zero Tolerance (C3P0) for harmful behavior and time-wasting. [C3P0 Policy](https://github.com/daveshap/C3P0). + +3. **PR Requirements**: All PRs must include a clear description. Limit submissions to one PR per day, ensuring it adheres to the project's style and structure. Refraining from reformatting, refactoring, or restructuring the project is crucial—non-compliant PRs will be rejected. 
\ No newline at end of file

From 977c67eb59aecc0f860983585561906bff671987 Mon Sep 17 00:00:00 2001
From: David Shapiro
Date: Thu, 9 Nov 2023 05:59:34 -0500
Subject: [PATCH 010/141] updating agent builder readme

---
 agent_builder/README.md | 23 +++++++++++++++++++++--
 1 file changed, 21 insertions(+), 2 deletions(-)

diff --git a/agent_builder/README.md b/agent_builder/README.md
index 35dbdbd..99ecd11 100644
--- a/agent_builder/README.md
+++ b/agent_builder/README.md
@@ -1,6 +1,8 @@
 # Agent Builder
 
-This preliminary experiment has to do with getting the OpenAI Assistant endpoint to create other Assistants (agents).
+This preliminary experiment has to do with getting the OpenAI Assistant endpoint to create other Assistants (agents). This is the primary thing we need to figure out: agents that build agents in a structured, hierarchical manner. This will allow us to create agent swarms of arbitrary size and purpose.
+
+Given the current API, there are just a few primary parameters to work with.
 
 ## Parameters
 
@@ -40,4 +42,21 @@ This includes the list of files to be included with the agent at instantiation.
 
 2. **Software Specifications:** For agents meant to build anything software related, it should get specifications such as definition of done and other critical components so it knows what to build.
 
-3. **Relevant Documentation:** This should be API documentation, procedural documentation, and other stuff relevant to the agent's particular `mission` and `functions`.
\ No newline at end of file
+3. **Relevant Documentation:** This should be API documentation, procedural documentation, and other stuff relevant to the agent's particular `mission` and `functions`.
+
+
+## Chat Functions
+
+The primary method of interaction with the agents is via chat dialog, similar to the ChatGPT API. The USER (input) could be anything from directives from other agents (like a supervisor or manager) to chat logs or messages from groups of agents, telemetry from various sources, and so on. The output, likewise, is something that can be recorded and "sent up the chain".
+
+### USER Input
+
+The USER, in this case, is a stand-in for the rest of the HAAS swarm. It could be a direct supervisor agent (manager) or something else. Here are some ideas:
+
+1. **Supervisor Directives:** We will need to have supervisor or manager agents telling other agents what to do.
+2. **Group Chats:** As demonstrated by ChatDev and other "chatroom" style use cases of agents.
+3. **Telemetry:** This can include logs and feedback from other systems to provide updated context.
+
+### Agent Output
+
+By and large, agent output will probably be consumed by other agents, message queues, and system buses. It is not yet clear how we'll structure this. It could get very noisy very fast.
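+
+As a purely illustrative sketch (not a committed design), one possible shape is an in-process queue per agent, with each agent's output fanned out to whichever agents sit downstream; the agent names and message format below are placeholders:
+
+```python
+import queue
+
+# One inbox per agent; names and topology are placeholders.
+inboxes = {name: queue.Queue() for name in ["SOB", "Manager", "Worker"]}
+
+def route(sender: str, message: str, downstream: list[str]) -> None:
+    """Fan an agent's output out to the agents that should hear about it."""
+    for name in downstream:
+        inboxes[name].put({"from": sender, "content": message})
+
+route("Worker", "Bug #42 fixed and tests passing.", ["Manager"])
+```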
\ No newline at end of file From 593289674beb05ab951bfb516bf4199af66efe10 Mon Sep 17 00:00:00 2001 From: guillermo-delrio Date: Thu, 9 Nov 2023 12:03:21 +0100 Subject: [PATCH 011/141] Restructure --- agent-builder/README.md | 13 -------- agent_builder/agent_definition.md | 2 +- .../files/HAAS_Documentation.md | 0 .../files/OpenAI_Documentation.md | 0 .../instructions.md | 2 +- {agent-builder => agent_builder}/create.py | 33 ++++++++++++------- 6 files changed, 24 insertions(+), 26 deletions(-) delete mode 100644 agent-builder/README.md rename agent-builder/agents/Autonomous Swarm Agent Builder/files/README.md => agent_builder/agents/Autonomous Swarm Agent Builder/files/HAAS_Documentation.md (100%) rename {agent-builder => agent_builder}/agents/Autonomous Swarm Agent Builder/files/OpenAI_Documentation.md (100%) rename {agent-builder => agent_builder}/agents/Autonomous Swarm Agent Builder/instructions.md (89%) rename {agent-builder => agent_builder}/create.py (61%) diff --git a/agent-builder/README.md b/agent-builder/README.md deleted file mode 100644 index 21907b2..0000000 --- a/agent-builder/README.md +++ /dev/null @@ -1,13 +0,0 @@ -# Agent Builder - -## Description -This script is designed to create assistants using OpenAI's API based on a predefined folder structure. For each agent: -- Create a folder with its name, eg.: Autonomous Swarm Agent Builder -- Create a "instructions.md" file with the custom instructions for that agent. -- If you want to provide files for RAG, create a files folder and place them there. - -## Requirements -Python 3.x -Packages: -- openai -- python-dotenv \ No newline at end of file diff --git a/agent_builder/agent_definition.md b/agent_builder/agent_definition.md index 8afa539..488bc6d 100644 --- a/agent_builder/agent_definition.md +++ b/agent_builder/agent_definition.md @@ -8,7 +8,7 @@ Autonomous Swarm Agent Builder # Background info Two files have been provided for useful context: -1) README.md: contains information regarding HAAS. +1) HAAS_Documentation.md: contains information regarding HAAS. 2) OpenAI_Documentation.md: contains information regarding OpenAI agents. # Rules diff --git a/agent-builder/agents/Autonomous Swarm Agent Builder/files/README.md b/agent_builder/agents/Autonomous Swarm Agent Builder/files/HAAS_Documentation.md similarity index 100% rename from agent-builder/agents/Autonomous Swarm Agent Builder/files/README.md rename to agent_builder/agents/Autonomous Swarm Agent Builder/files/HAAS_Documentation.md diff --git a/agent-builder/agents/Autonomous Swarm Agent Builder/files/OpenAI_Documentation.md b/agent_builder/agents/Autonomous Swarm Agent Builder/files/OpenAI_Documentation.md similarity index 100% rename from agent-builder/agents/Autonomous Swarm Agent Builder/files/OpenAI_Documentation.md rename to agent_builder/agents/Autonomous Swarm Agent Builder/files/OpenAI_Documentation.md diff --git a/agent-builder/agents/Autonomous Swarm Agent Builder/instructions.md b/agent_builder/agents/Autonomous Swarm Agent Builder/instructions.md similarity index 89% rename from agent-builder/agents/Autonomous Swarm Agent Builder/instructions.md rename to agent_builder/agents/Autonomous Swarm Agent Builder/instructions.md index 4b422c4..4db8af5 100644 --- a/agent-builder/agents/Autonomous Swarm Agent Builder/instructions.md +++ b/agent_builder/agents/Autonomous Swarm Agent Builder/instructions.md @@ -3,7 +3,7 @@ # Background info Two files have been provided for useful context: -1) README.md: contains information regarding HAAS. 
+1) HAAS_Documentation.md: contains information regarding HAAS. 2) OpenAI_Documentation.md: contains information regarding OpenAI agents. # Rules diff --git a/agent-builder/create.py b/agent_builder/create.py similarity index 61% rename from agent-builder/create.py rename to agent_builder/create.py index 06511ad..1789c5e 100644 --- a/agent-builder/create.py +++ b/agent_builder/create.py @@ -10,20 +10,25 @@ client = OpenAI(api_key=api_key) +# Check if the 'agents' folder is empty or doesn't exist +if not os.path.exists(agents_path) or not os.path.isdir(agents_path) or not os.listdir(agents_path): + raise ValueError('The "agents" folder is missing, not a directory, or empty.') + # Iterate over each folder inside the 'agents' folder for agent_name in os.listdir(agents_path): agent_folder = os.path.join(agents_path, agent_name) if os.path.isdir(agent_folder): # Read contents from the 'instructions.md' file + instructions = '' instructions_file_path = os.path.join(agent_folder, 'instructions.md') if os.path.isfile(instructions_file_path): with open(instructions_file_path, 'r') as f: instructions = f.read() # Check for the 'files' subfolder and process its contents + files = [] files_folder = os.path.join(agent_folder, 'files') if os.path.isdir(files_folder): - files = [] for filename in os.listdir(files_folder): file_path = os.path.join(files_folder, filename) with open(file_path, 'rb') as file_data: @@ -34,15 +39,21 @@ print(agent_name) print("") print(instructions) - print("") - print(f"Files: {list(map(lambda x: x['name'], files))}") + if files: + print("") + print(f"Files: {list(map(lambda x: x['name'], files))}") + + create_params = { + "name": agent_name, + "instructions": instructions, + "model": 'gpt-4-1106-preview', + "tools": [{'type': 'code_interpreter'}, {'type': 'retrieval'}] + } + + # Only include 'file_ids' if there are files + if files: + create_params['file_ids'] = list(map(lambda x: x['id'], files)) - # Create the assistant using the uploaded file IDs - assistant = client.beta.assistants.create( - name=f'Assistant for {agent_name}', - instructions=instructions, - model='gpt-4-1106-preview', - tools=[{'type': 'code_interpreter'}, {'type': 'retrieval'}], - file_ids=list(map(lambda x: x['id'], files)) # Pass the collected file IDs - ) + # Create the assistant using the uploaded file IDs if files exist + assistant = client.beta.assistants.create(**create_params) print("***********************************************") \ No newline at end of file From dfe74b7ed3c3a09af643e673d2c97e9c7df8a586 Mon Sep 17 00:00:00 2001 From: guillermo-delrio Date: Thu, 9 Nov 2023 16:37:04 +0100 Subject: [PATCH 012/141] Start working on a direct agent connector --- agent_connector/1.py | 17 +++++++++++++++++ agent_connector/agents.yaml | 8 ++++++++ 2 files changed, 25 insertions(+) create mode 100644 agent_connector/1.py create mode 100644 agent_connector/agents.yaml diff --git a/agent_connector/1.py b/agent_connector/1.py new file mode 100644 index 0000000..696e355 --- /dev/null +++ b/agent_connector/1.py @@ -0,0 +1,17 @@ +import yaml + +def load_agents_yaml(file_path): + with open(file_path, 'r') as stream: + agents = yaml.safe_load(stream) + return agents + +# Assuming agents.yaml is in the same directory as this script +file_path = 'agents.yaml' +agents = load_agents_yaml(file_path) + +for agent in agents: + print(f"Name: {agent['name']}") + print(f"Id: {agent['id']}") + if 'talksTo' in agent: + print(f"Talks to: {agent['talksTo']}") + print("") \ No newline at end of file diff --git 
a/agent_connector/agents.yaml b/agent_connector/agents.yaml new file mode 100644 index 0000000..dac532f --- /dev/null +++ b/agent_connector/agents.yaml @@ -0,0 +1,8 @@ +- name: "Agent 1" + id: "asst_OswAEoT5NnteGEzNIP9UOa7S" + talksTo: ["Agent 2", "Agent 3"] +- name: "Agent 2" + id: "asst_utnfDavVtGWkjFD3BEqXGR2O" + talksTo: ["Agent 3"] +- name: "Agent 3" + id: "asst_wFxS2rVmN9rjKDy7znaeNo3T" \ No newline at end of file From 3ad16d49b76115698531ddd8c4ee45caee19f0ba Mon Sep 17 00:00:00 2001 From: guillermo-delrio Date: Fri, 10 Nov 2023 00:12:43 +0100 Subject: [PATCH 013/141] Fix queue system --- agent_connector/1.py | 17 ------- agent_connector/README.md | 3 ++ agent_connector/agents.yaml | 10 ++-- agent_connector/connect.py | 92 +++++++++++++++++++++++++++++++++++++ 4 files changed, 100 insertions(+), 22 deletions(-) delete mode 100644 agent_connector/1.py create mode 100644 agent_connector/README.md create mode 100644 agent_connector/connect.py diff --git a/agent_connector/1.py b/agent_connector/1.py deleted file mode 100644 index 696e355..0000000 --- a/agent_connector/1.py +++ /dev/null @@ -1,17 +0,0 @@ -import yaml - -def load_agents_yaml(file_path): - with open(file_path, 'r') as stream: - agents = yaml.safe_load(stream) - return agents - -# Assuming agents.yaml is in the same directory as this script -file_path = 'agents.yaml' -agents = load_agents_yaml(file_path) - -for agent in agents: - print(f"Name: {agent['name']}") - print(f"Id: {agent['id']}") - if 'talksTo' in agent: - print(f"Talks to: {agent['talksTo']}") - print("") \ No newline at end of file diff --git a/agent_connector/README.md b/agent_connector/README.md new file mode 100644 index 0000000..fb2be81 --- /dev/null +++ b/agent_connector/README.md @@ -0,0 +1,3 @@ +Provide the agents topology on the agents.yaml file. +After running connect.py, a thread will be created for each agent. +Internally queues handle the messages that go to different agents. 
\ No newline at end of file diff --git a/agent_connector/agents.yaml b/agent_connector/agents.yaml index dac532f..ebc18fa 100644 --- a/agent_connector/agents.yaml +++ b/agent_connector/agents.yaml @@ -1,8 +1,8 @@ -- name: "Agent 1" +- name: "Upper case" id: "asst_OswAEoT5NnteGEzNIP9UOa7S" - talksTo: ["Agent 2", "Agent 3"] -- name: "Agent 2" + talksTo: ["Lower case", "Random case"] +- name: "Lower case" id: "asst_utnfDavVtGWkjFD3BEqXGR2O" - talksTo: ["Agent 3"] -- name: "Agent 3" + talksTo: ["Random case"] +- name: "Random case" id: "asst_wFxS2rVmN9rjKDy7znaeNo3T" \ No newline at end of file diff --git a/agent_connector/connect.py b/agent_connector/connect.py new file mode 100644 index 0000000..e7df655 --- /dev/null +++ b/agent_connector/connect.py @@ -0,0 +1,92 @@ +import yaml +from openai import OpenAI +import os +import dotenv +dotenv.load_dotenv() +import queue as queueModule +import time +import threading + +agents_path = 'agents' +api_key = os.getenv('OPENAI_API_KEY') +if api_key is None: + raise ValueError('The OPENAI_API_KEY environment variable is not set.') + +client = OpenAI(api_key=api_key) + +# Get the directory name of the current script +script_dir = os.path.dirname(os.path.abspath(__file__)) + +# Construct the absolute path to the agents.yaml file +yaml_file_path = os.path.join(script_dir, 'agents.yaml') + +with open(yaml_file_path, 'r') as stream: + agents = yaml.safe_load(stream) + +messageQueue = [] + +def handleThreadForAgent(agent): + messages = [] + + print(f"[{agent['name']}] Id: {agent['id']}") + if 'talksTo' in agent: + print(f"[{agent['name']}] Talks to: {agent['talksTo']}") + + thread = client.beta.threads.create() + print(f"[{agent['name']}] Thread {thread.id}") + + print("") + queue = queues[agent['name']] + waitingForMessages = True + while True: + if waitingForMessages: + message = queue.get(block=True) + if message is not None: + waitingForMessages = False + # print(f"[{agent['name']}] Recieved: {message}") + messages.append(message) + client.beta.threads.messages.create( + thread_id=thread.id, + content=message, + role='user' + ) + + run = client.beta.threads.runs.create( + thread_id=thread.id, + assistant_id=agent['id'] + ) + + else: + run = client.beta.threads.runs.retrieve(thread_id=thread.id, run_id=run.id) + if run.status == 'completed': + waitingForMessages = True + + message_list = client.beta.threads.messages.list( + thread_id=thread.id + ) + retrievedMessages = [] + for datum in message_list.data: + for content in datum.content: + retrievedMessages.append(content.text.value) + retrievedMessages.reverse() + + i = len(messages) + while i < len(retrievedMessages): + retrievedMessage=retrievedMessages[i] + messages.append(retrievedMessage) + print(f"[{agent['name']}] Message: {retrievedMessage}") + if 'talksTo' in agent: + for downstreamAgent in agent['talksTo']: + print(f"[{agent['name']}] Sending message to {downstreamAgent}") + queues[downstreamAgent].put(retrievedMessage) + print("") + i+=1 + time.sleep(1) + +queues = {} + +for agent in agents: + queues[agent['name']] = queueModule.Queue() + threading.Thread(target=handleThreadForAgent, args=(agent,)).start() + +queues['Upper case'].put("aaaaa") \ No newline at end of file From cefeab437ac721a523fccf494695de522eb4531c Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?G=C3=B6khan=20Mete=20ERT=C3=9CRK?= <92143124+gokhanmeteerturk@users.noreply.github.com> Date: Fri, 10 Nov 2023 18:28:04 +0300 Subject: [PATCH 014/141] Fix typo in agent_definition.md Fixed typo for README.md file name from agent definition --- 
 agent_builder/agent_definition.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/agent_builder/agent_definition.md b/agent_builder/agent_definition.md
index 488bc6d..1da7a94 100644
--- a/agent_builder/agent_definition.md
+++ b/agent_builder/agent_definition.md
@@ -28,5 +28,5 @@ gpt-4-1106-preview
 - Retrieval
 
 # Files
-- READE.md
-- OpenAI_Documentation.md
\ No newline at end of file
+- README.md
+- OpenAI_Documentation.md

From 02eaeaacf1a2c37aec1db0ddf6ab99013728d265 Mon Sep 17 00:00:00 2001
From: Oliver Wiggins-Hay
Date: Fri, 10 Nov 2023 15:35:08 +0000
Subject: [PATCH 015/141] tool-making-assistants

Updates tool-maker readme
Demo images
Demo py file included
---
 tool_maker/DEMO.png           | Bin 0 -> 21975 bytes
 tool_maker/OpenAI_Tools.PNG   | Bin 0 -> 19260 bytes
 tool_maker/README.md          |  57 +++++++++++-
 tool_maker/flow.png           | Bin 0 -> 24653 bytes
 tool_maker/tool_maker_demo.py | 169 ++++++++++++++++++++++++++++++++++
 5 files changed, 224 insertions(+), 2 deletions(-)
 create mode 100644 tool_maker/DEMO.png
 create mode 100644 tool_maker/OpenAI_Tools.PNG
 create mode 100644 tool_maker/flow.png
 create mode 100644 tool_maker/tool_maker_demo.py

diff --git a/tool_maker/DEMO.png b/tool_maker/DEMO.png
new file mode 100644
index 0000000000000000000000000000000000000000..7804a48ec40b1a15a51b12bdbbd2e58873ba65e9
GIT binary patch
literal 21975
[... base85-encoded binary image data for tool_maker/DEMO.png omitted ...]

diff --git a/tool_maker/OpenAI_Tools.PNG b/tool_maker/OpenAI_Tools.PNG
new file mode 100644
index 0000000000000000000000000000000000000000..54c5b60308f71f4a6a9292194f241aa81326e802
GIT binary patch
literal 19260
[... base85-encoded binary image data for tool_maker/OpenAI_Tools.PNG omitted ...]
diff --git a/tool_maker/README.md b/tool_maker/README.md
index 1e24731..ca0e9b4 100644
--- a/tool_maker/README.md
+++ b/tool_maker/README.md
@@ -1,5 +1,58 @@
 # Tool Maker
 
-This preliminary experiment has to do with getting the OpenAI Assistant endpoint to create agents that can create tools (functions). The ability to instantiate any tool from scratch will be critical to enabling a fully autonomous swarm.
+This preliminary experiment has to do with getting the OpenAI Assistant endpoint to create agents that can create tools (functions). The ability to instantiate any tool from scratch will be critical to enabling a fully autonomous swarm.
 
-This function is as-yet undefined.
\ No newline at end of file
+This function is as-yet undefined.
+
+# Version 1
+Run the ```tool_maker_demo.py``` file.
+
+You will be prompted to define a tool for creation. The assistant will then generate an OpenAI tool-compatible JSON schema defining the name of the new function, its description, and the input argument schema. It will proceed to add this tool to the current assistant.
+
+(Example)
+![[DEMO.png]]
+
+(Update to assistant on OpenAI servers)
+![[OpenAI_Tools.PNG]]
+
+## Assistant Instructions
+
+```
+Instruction Set for Assistant-to-be-Tool_Creator:
+
+Initialize: Prepare to receive input for the creation of a new function using the request_function tool.
+
+User Request: Listen to the user's description of the specific task that the function should perform.
+
+Function Name: a. Derived from the task description, formulate a concise and descriptive function name. b. Aim for clarity and specificity to convey the function's purpose effectively.
+
+Function Description: a. Write a clear and precise description of the function's expected behavior. b. Include details about what the function will accomplish and any side effects. c. (Emphasize) Ensure that the description explicitly communicates the function's intended outcome to avoid ambiguity.
+
+Input Arguments JSON Schema: a. Based on the requirements of the task, define the JSON schema for the input arguments. b. The schema should be comprehensive and must specify the types, required fields, and constraints for each input argument. c. Ensure that the schema aligns with the user's requirements and the function's intended behavior.
+
+Validation: Cross-check the name, description, and JSON schema against the user's requirements to confirm accuracy and completeness.
+
+Execution: Utilize the request_function tool with the following inputs:
+
+name: [Function Name]
+
+descriptions: [Function Description]
+
+input_argument_json_schema: [Input Arguments JSON Schema]
+
+Feedback Loop: Promptly present the newly created function specifications to the user for any feedback or necessary revisions.
+
+Iterate: Make adjustments as requested by the user, refining the function name, description, and input argument schema until it meets the user's satisfaction.
+
+Finalize: Once the user gives approval, consider the function creation process complete.
+
+Note: Remember to prioritize user requirements and emphasize clear communication in the function description, as highlighted by the user.
+```
+
+## Flowchart
+![[flow.png]]
+
+# Next Steps
+This process allows the assistant to request a function with some parameters, and allows for the addition of that function to the assistant's tools, but no such function is ever actually made.
+
+A function object needs to be saved that a function_finder can look for in order to run the output parameters of the assistant; even the output in the example image above is a hallucination.
\ No newline at end of file
diff --git a/tool_maker/flow.png b/tool_maker/flow.png
new file mode 100644
index 0000000000000000000000000000000000000000..29ee97c3f11fc01041a5832af566f2707dc46675
GIT binary patch
literal 24653
[... base85-encoded binary image data for tool_maker/flow.png omitted ...]
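Aside, not part of the patch above: the JSON schema the tool_maker README describes corresponds to the OpenAI function-tool format that later patches also use. A minimal sketch of what one generated definition could look like follows; the `convert_temperature` name, its description, and the `value`/`unit` parameters are invented purely for illustration.

```python
# Illustrative sketch only: a hypothetical function-tool definition in the
# OpenAI Assistants format, similar to what the tool_maker assistant is
# described as producing. Every field value below is made up.
example_tool_definition = {
    "type": "function",
    "function": {
        "name": "convert_temperature",
        "description": "Convert a temperature value between Celsius and Fahrenheit.",
        "parameters": {
            "type": "object",
            "properties": {
                "value": {"type": "number", "description": "Temperature to convert"},
                "unit": {"type": "string", "enum": ["c", "f"], "description": "Unit of the input value"},
            },
            "required": ["value", "unit"],
        },
    },
}
```

Attaching a definition like this through an assistant's `tools` list is what the demo refers to as adding the tool to the current assistant.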
z-@t%48INeJV*6jg;;imzw`n~=tO3kczuCXxa@j)ON z^q~_juOf$@^v1YRt(q#JJPd31?HJ6B9L`9xHJz&BRhBFbR8weaR{%t9bdOl z>E57yW$V>A(#N&bD1U~x`H$$%vtlKG zhy`N8uyxuj>I%UTok)m@<>-h4B9n{5{7?}v;DQJQ2v}o z^eWtV!0{fi+_vYtDx)fxomW|vrg}bX@b4JC)fEMP^-D+KY_6a4N!DKwOCJ71xC$NR z2u~#dQz%r8m3%dUP|W!p7aa3yD`0Y;2&Kf^XNL={DGf5ipX)pKjwFwif3UuG6=6K^ z#pxn%GL#O!4OEXgDek}qoI?dD1k?NQFzQxs&CLw4*u$|~l7DP)1~ z#2^*?;HK=(;v_(P-{@>gBferibiHL15mcs|^6V#5h!?vRdov>O>DZIpF^z$sCoRxA zY1z*H{YU%ps;nvc^9x6M-XsH2&pEU%7VKp)SINFC<%hGbm8%U1B=FjSI|6xl|7>6! zfCb6l^q=fH=cTjw-;`eV7JzX44+0oKSo#M64FV8I|3ScMOfCYL;QfPh4Zyka&%VDs zNb2iEd=)>?LB2N&HImzlOg4KPUNKH zBtB4mcc0s;;-zxUE^mQ@%a|!4K#W!5g#EzahJn&J0~pt${K0F)>$M=jvh(RqDsHlx z-;#9s3##9jaTJ#3(#`X4v2-(5ONB*jZ1^qRjD{T{26`tKxlJRhH9Z*x_F+ZV|6Il(KH<0+OO}YXd zh?`_+Wa4?)PcVN}><1hz$pi~#8Xc@{I3X70Zens|DGASQobm;qcDy8uh&&4)M@uxkre6MkQgWb}~Z=p{W^n>taBM2a978Os}KP(w+pi3r`Smp35Ygy3BwfCa;}svhU%4vRo*5#$n5_t@xDJLsKS zFT4A$P_|x0t+x3Np(v$<3~|2R3vgG|Jz>E33gN4X5AS@ln48`AA&3(XLpm|TXMO>Z zuk0lwsA>nL^3ERmsbu^727LvBDw~@!SHO+eueD9Vn8cOERpIyQX{1~vU37DK@B%3; zN#*8+rVXmfdo-rEpe6E|Q=0}0`~ryV$N>k_UtCKAGx_r}q}0IHAoUV}3-QKzIf}%+ zckP?9Iw!Z<_><&co1ajUir23lX2_9iF)@g_4e`^0E*%0cwOV3#AWU}a?ndp^!Y~hT z!MntO#%UXDJPU}o+aHho{QahDD>S;^BG}YAl=v{!MsW#=i#ZkamOT*2b*1QRn*{Bn zcEp{`r9P8!3eS{Sci?W%J3U88t*`+bk!sjI~8RVq{Z&HyGMxo=l0dd@u<7PVHN1^aU z8xM>(()Q_$C|(ZpX!t&yYay#xrN3Io^@+^j8#Z;eP`&9iYpRet2KZ-LXI**gyl^g~ z*MS2>per<3he|j$p~fk~CjR)jPO(b_lUD3nTQKc?@fZz~iZ?JSw(uQ+%yJasRV}XL zZ(Km!5Kczw2lky!57KWd6WG~-dG@m{AmQ-?3xMqZvcRvCogaH!H zk$8~MB29qfxQ5L5BENx(G?k5q9D>?*qlC;+xcQ1e7K(zgeyRc4i^%Lm>QIH_dD1OPZ_~cw>K2xF} zQpm=`3a7J59)b<4roZ=l0At6z-d7MJhZI;#mN)eO0yZ-8KY}sysDF%Yf?og3C3UK5 zu6toc$s2L{6Q;LD7@O_S50CB+vzJ@BSG}-QtEfQ&B!RzXh~DacU%+$N^ut-Uba<;0 z;3ELnQ`NK2pML}sY4fE7Za~}va-~RQzG^CXF z8F=9N33`Q#6g!8=o#y@JCFbK(8-;qdLS@X}LdR#4FmHXs6%#8IBHHaqO-V}Ps^;WW zjKh0XI~E?7`eM8HzdJ?u`#S`f`ILCqx!^4=Wwk~0^*#kUW+hP{U4r;V$>(1&fne2^ z!r@4NiL8{!$9dUBiRBiW=%{kDePcxJkHd#a0bI70qh^3u%hqMPM3p@fQR;|peCP|ORxcb#V1+ zB2`a?*m9~?niJARC#t(%Gk-GKa+GlVU?46wR8+Bl?VgM7FzJcU||}2@@5#LjYLh<7V#Akry9) zHPf4uKHfQst5-Lv7Q;b`ZkOj5&ZBOq&DT>$KNc_7TUx@fIdb*@@9_cTYOe9GlSAQ- zwene0Oj7XNU8zU0kZ&k!B1?F6PcdR;oH>0-W0I~fx~K>Je5{D1v@N888j>`6J?^yu znZOb`UuV<054>l=>`S%n?0yQg{6O|3b3&58<4IITA>H_L0MFAo*}Qk~w%X`Tc(j&( zNEeQ__0oJLVx>a5obi%!Z`_WU{1B!W6Kxnz@>yfaVlY=AM{9Gzj2TYE4-;{+(X@pq8&Gx_V%n$;=W?UpRDNxGS)L+i zaNFBm$21aoY$mm_B;*ZgsGvr0V~|s|5o5)4x&~7dY(ku*(8gh=53AYlQC=_7AB6f z06&F=TBhVJxCvz)Pd|`pZR5!`3WruPimPW^<#6OwqleO3YdOvOdA!Mv?eoKNBw)Kpm5Wr5@otH*a%h?h20v);Cma# zL>eBVxb8~rD`55#_FwYAS6qKamDLNQwh_YUTavSX1x(QVBcKRcjnrq{2b^jd>3*%z z6r#GgDEcn;arT}K@J4R+v`Ual9aeoRm1=Db6l@2VYqC)J?0!v{Pd(FBxMyyZ?CXql4RY`N~R&sW?o68Nm0Zzo5k{81qNO&pGu zM`%u0UZPvqi@DKsMBj%}o-uE=*!I8Wl_18;N0?^3QuKNdTVZB@0J9nyDx?~gE2-70 zqAe4K^cJ&@rWH+2?2<-(v^=lqOyw-_wRQTnTg(=NieB-b9_JO) zKYoO9{Brt0pB0|3>@Xq~1}%_z)Gn0ky)(uzC22IEMGPqF`BzcOdSkf6Y0R65T@dfcIu7=aTGPB!Vu~c04!1pF_6tmybH<|( zK(EKM6`Jj6^YevazejGtR?oOHrO2ALNz}Zu;I`sRa9&TgT;AwK(1MBBv-m#kddBJ8 zAkCLslC)H>zj(61qnFn9vWyMMP}hBFnt>w2Ly3QpeE9onH9WnwsD|s-*0{k-Wz#S$ zhpJxFJb?0YhKRtIW^zpG`to)ADp zvE1e$3}1q|H|_{rL)+s_U%%?HJe25dE!r`@$T{Imraqa&Z6vyC)Vy@hh{PUuHjypI z5*_hhUGSQ*Ufl4IC1X9N#eTT}<7N+>tIkeO&r|TO$wf#&62*=4SN}s&vvoaq!HjUP z{f}104`$XSsZbP20W3e6Xnvc3fCf+?YF7<|SbFmbr; zUWCa0OQS4AI_kekpFR3F5U2J9U0PcEgEJREp_yiytGg*Sj~)gh3=UO2$_Dff!{P#| z4C|pJD_(Q5(&wN$8s|KBHLPW*)eE;f(;kZ=Twa%_cLTunSOKw^D`dK#az5coRO=fpV(?{2d zuK^Vhd_St1pDJZ_bILB$(B0=-!T@W102B`z(6o{pFMWvmK2?Yd#&i5oic1sm)!#s$ zZ|$u7Z zjWqe3^h`@1*&n>U9#3=fh6OGW)Ogd_lWCd zgg&Ks=v$+28gDZE2%eZI@9SI@Y&ixYx70Hx@!l;Vfu?iVF#=iGW;sJ 
z4f|*5Gq6OxTqr`xcgeS*Oi+`#o+;zv-b;Z^y*Rleg@`t?wFd9*`;FcrmcEVB3YdOg zWuN}J)H|E*v)+nk{@T;UXUDuKtQX2BqkXCxs zy1sdsm=RFCuv_h4-SSY=Y+s4n@^UwSC@K^Dcr>r+4EhIuJ$w51Sg>I4wC591@SM(( zA&K(%#se{HaQ&s9UvF)Rbika4M8xi%Zmi|w#xB`9DYIXL7jxiaF#!9A>8p^O^-m?H zP0RP;Uk{M89}7i$dv!6J0n>qKdjKztVqZ7bbzh@J>s9%anJqol-sH*MX9uPhy=tsx zhr%@At}~5aPD}Fc)Tq;GQBJVWY5m>r`6mT3hzUEg&K_Z*Q<0c`d#seE)t0VKXA_-P zZ5UMl^juuTr>YPkb=GJe4*liE>|5Tr`rf?hcExX*ht<|R9!nxiots|#K3@KV92=3I zKxLO1p1<^pdz|5OO$4@OiyncvQ3`&&v|4yE+Q=`{emXOeF+J}urs>{`!ullosZQES zFrbU})>p^P3qw$d$!bzHjCkzGG!~GoKq??-me&KXsk757X+0wkb;u2C3%CbwSmGir zqe~4C2Jd>Fr><`MRGtHY1T*u}$bZil zJ5rS6GVnF3`6gXI*V4V|rRT!}=aD?k!oaM#(d#}vWBO8Mr~ZjTzx@V$7WjyS>z8b< z*wLRfRNaZB2toLD+ng3~91veJzx{xEvvFAe7CE{5^-Ah?+Ox5rMrYJyC?_!|vwd{W zER*s~Bs_CzbAlbcp21N?1FM=5r4N}T1{kSoOz5A#lzkhsVT?01>cb~g0ay}%5V1g6 zR&wNbw}Ha2`e}8gC+1tfW6_sq7j_4+I>bEnOSOPB-9i~I{6^Q^9$^}1C;WzpLbG{@ zg>mOkC(_3)bsx4I$Bz?%Q@7Lg^2(uNfe62E9j65}2c(x;^N;+k6s_EEu+=ZET#-52 z)Yc0pV}>)ioHVY6oVUo3cb(Q`iSaMNX|YnhP6BBA?}v^w^^|U8C{3{dv;A-2>xOc? z^857wqtB6LWpk6QJ}i^hUUcB0iaA@>*%H;OaF2o({>;W z`T5bl!?ZE-}AJwvCHl@Vc?aLkDyHagfg{Fsda9+ZkFsTP+;nOc!sJUSBj(jC)6H3* zy*tBq+b;l@T!C2BTUOHOEtuIydyPwzgX>{tQi!?lGUQt@cUOSf0kE0W0cL2i&L)F9 zyN%~QV2czfs~IwIJ={zx39m(lGF@MW5|+@D(WgpNNdr?z%Q}n90dI{rRX@A4bD=D$ z)@kM`y6CL$7mP5eC}r$=Kg*PHuJtsuCg5K8T6zhh5ZKbJJ=zzo{fNU2;7+xaPMeS+ zm7#rVd&F(*c%6g{Rd_IDp?NrZWSZ&ES=^%b3Au30KeBBF#QN=FRXquFCef;Tgv~eZ z8sCZa@g{PVI4`)~-b^YX2t{ZRy1Oa+rAbw9>sPvfodB9w3~-Qyg~*suIGy^}lrkuG zp`82a{@rlH#ZTGTwgSewCnG8!gEc=3eEIAw7FK+EeBQcV?zgW+LK41%z|FotXv@)GiYgr|3Wy({rjS@#=T@W zXT)@tCI3hBP=?OneDg6MKCB6*WL%$AY9L7e{atnre>LizyKN-ZSQ_AvNvh$6ZoE-Hfl-7$4Lz2sE#Oy?W#7@E7XB)kl2|@$zOYcIjzf)dg^P$|W zDQO@>?E*0;2rq0XMp*4WZWHIz@3P~v*ca@5fwsR>sJnAcw<#@H?`S6VZZ1+u7L!rd zzP@%P|8eQ9WkIm}{Es57a$REh5_ZRsXTHK$q*{$rdRk5G)v+``pEG#Z4?l?^^ELtO z;8qxthzyd)R%r5an;|rujw9UGmh1Wwj19iJ{nO<_9)qbi=#17p>d9~bbd8-`29C<5 z-bIPTNr{8evX>@6G%f)av%tUsi%|&Xy;Df1E-*v$CW?20{m4-MX{PJ*r>{0QVeWvv z8Dwr;zEq7|ETv$)yY}mJq67$nb)b-N^I%Uw#6r&_RfBPp6TIdYtmyDb?ClebAw{u) zbQm3zqUHC)6s+6ns`*#aOC_;yqr-d%*E{ydT|#Gt5>Xmw(t9srU*ujzL8?nyipH4l zgKzgTlFcmpZS^`jTJEyK)!8c97rZy$xxcf@wg-yuHu&6BIz28Z`#a@*%rQgsBXfTR zq}^(f8AUWdCE40tOnZXiwT27Uj>YT@zL};mpUlDha@&y7!IW3IBD9{4?a)h9siiIE zNi*sC$gjBgkt1Md3wP%qhvO|gzH>Xd<@7=GK@i^TPpl-9TuI@}Jy{3AJ z)YS=~PbdI);Ga}yZ&`CL$Lh|oI(*semJmJL$5|;V8Pku2v7QA*H+`+U=Ks)cRU5PJ zewknYP}FDOa4H6B{NQvxs-dW%J{&uDUtM6W(=jC{C!RaFE8mYFzFRSvosqxOIj!gJ zN(QTrt_U537#){a2s1Wu?i~o;fr=J0npt<9_?r)(47?RheZ2K|3v-GrHUMp=WaWBe z;^lV678j%kRIyf^%iOv^{8ehH!nVLRCtUBgL3Ghmu|nT`N1w0iI}rOEtK`NJ^>`Z1t;H9oKYqOwMg%-wAGZ(fS>+anKml zLi!^mV5rfX%m^z32e0i<$|emgsAsvICluKsmj>lh0=&K9b~{+7e4EsivBVwI0Sp9^ zDzLjS`!0I7ziS#ckttSUx;G~(y3r>zK)WR~z+%m_u0HjUVqhym7ehWEx}?d_s-Cr7 zl_5ic$`j3PoxAPSF5IfFHM86N%|qi=R@jDb{nwNcjN1}LZn6$tK~%1i%dVZ))?JKm z%1Ls@fK1_s_R5T}cUtWupk{)*IwQFv!t5+1BA~hWaMzWTODZug zqpN{}uSvj!lb6U_;)Vq+u&UF-*soA!u83*|2}l0xJe{xo4^H`=*x?1+;w-w&Ds@( z4(!`x*Y#;huZ`y-pSxVEKDaD;O5^W1AMCyyHx+F&+Mcw?;M(oAf4_KEGFs4Qj^t3)SmxW82R%l*4Qj{A^_E0$5^{5E#pQw$N&J(^P8 z!O|eVR4%uT#FP5TlBQlgoikKG<9>s&HfGa*2zXlsk3(08n5m? 
z!YpKtjWapdlA44glTu%)Ke;qDUcCq9>RQgr2_^1kiLof^LwNhz)ls8t_Yj78im#dU zC`w|g*S1iRgYGC z+4;2aQ?oHs3`5&GjyJ3_$))3~xfiF}>+EBv#>Uho3pzFjD~-$X#^(pBs(!b97G!?N zTv`p~XG9_!)sXMY;!eDCE$#{49dZY1eKy%9N2B`^C-`WTjY1V?n-Ib#{*~xc)-^$} zu%ctT@Pu#4CT+ivm!Oc(2Bj0XQvu%jtF_~#0tO!NK=_Sze}onerIn#{aHiuY|(Vn%QMr?%bCH;S_91}{X)JSVeY4errAIOzpMA$5Rygp z%bv|a<(kVbTeclobxB~CuRW4(I~Kn00>M-Wd3>4ki*4Kz*?J~0n0UjfPo(4`gCCKU zH0n0>8kQD!+npxL#9$K`PBGo3zo>7*a2;ht(jgzPv47%mP1*A&gGh_6E>CnV(CWc* z-5zlmZ$%9V40$@TU7!5SfTz6pu*B<5$iW!b;u=$rr_aLiBF?IFClALZ=L%@p8Z>MKA)g-SKszvnkZ zntIH-1S0cOZN?1Ce@f#&u;+uTl6pll0j_(BbZ`Pa%TP&u;b>Owju??1GnP@~pige< zzW#K%X_qTpLok3ol?D@IbHESVI-|dc3@DvGF~&IWHmMNU>Z?fRa!#C?0`rj#UXR@8 zeopFOymyB>Q$;7D>@dgCIv=)bj!_{8`Wj`QS1WhTzBZh>=-(Gr)#CwY)n2+zst� z!>>rwugN^)$BKLG<9){6P zhQJFYM)Mz|G|?Cvqm>bnlB35L?e45Mc}xsR7m;uHZ%IoFse*rPI=Ot?gI~{eX?lL3 zHcyw`aM|G|GkBtZcG_fTT8!+wd|UewLGb*SHqcQ}Hc!JZM6T-p#z- zNk=TWWCgbgxGr@9BMbG%($-*?rm)0uBR*(1Orh*BKgI|WVB50yI6{UoHAgPn1DK^! z#WEe9|K9xSn+SkP3cqL~_(O>NvzW*D;9 zM#1fu)3`v+3mem1p<1RFadr8*Prl2I*IbMrsiD4?j!Bz7*vd=y;6pM3CSD7WFf2R3 zDOH3|oH=Ioz;k9yRT6Nps0$~w&{K*c^l#(C(&qZ>i zB&6ti2ksZMC+jaa5uI0`yo>9Lc=GX!Van~ogu=_TWs06nHO>Pz9PO7# zHOEgrS=h=Dm7EG9b0Hc0{J<-?tDf!4!Wa9SuAjDmah0~32!F=YsBJd5bFN5P;3ll< zHmLjp-$oHZ=BEi`m=oEieJ@&_yz42BYniL~!~u*#dSf8ZP5bIp`2 zYxdh*`2iv;TYN%aO_|buEzmf-)cezv)+F?XnjXE0+#{2@y5660Fq3pvkR?@0)B8~C zyUq%mAMQ2<4n9j@9(ATu3*e~W(wjcy9O`w|`%A2!s zcEbNmzp4}3zx)gZr=eCCRPr;+rSD5h?@{}2+ZSFno;#mW4y6j zJ1U2ix~5m^E?r6hB>C`@Iy20@diEE&-w=h-@sQx%?Mi=ktA(9On!1r(i&s5ma!ZjB z0c)E3H8zHQjrD~L6lpeh3^mOi*xHUpK5aZH?G`=my?Gv2m1L{ZmmRg7Xq)Rw*0WD! zp?cx2V=*nCk^Dt}zb18hI~rj#qQpw&+)J4v{jX372GYRiqeeT_QMc&Qt=JG!uPRN2s!q)vY2{brXrMc^wC8e45>bs! z7vHKYi~~a)T$tf$ z*1gUbwf62OT!Ly2PkI2#hO_y?19NNsjM(_w)aCA$Q-I=y5CT^4n{7{oe)oQK=N;Dj zHhK8AUmrNCi3)moJWcT{^nnB*r+d}tz*%4G-E=4L18?og+~h2*c-r}ARK`kL^Mx-b z+^HW5v_Bgb<4^Qc*C=A$88dopq2lUvM(Uk0?%<@fX z3Sx?q4x$72mVAbOg+~|CvYOPIrNa~$t4UK@Jc@{>KjQ8CEzpu9a~pD)P1eqFCj*U{ zpSY`bOhchmtg1@D4A-my>I*lwy(Si(4Q>XJ?(@PYXAXC_uoCZ=s5abO+0WwF_6VhU za0V{R>t;BrvwD!NaLxbXl&*Eaw56u zBJ#yUAFPUvNDoNuMXx6l(WSqPZEzHmn+mnj0#qn0(FYkV-Vn%Bk44fWfmMl4}KEvEw8@*|z z-&Rhw+Cx4reO#AO_kZMbH~q$VL2fCZ5(NqDb0%)J4q7qo{49?_rzZ}!}c`?2eN z(8ddI)L#fCbqr_-Sv-737t?8w5)~quH7M}-%PFR0RG)PI=!x_tjQZ!%aq_7dYq=;4 zyt}a>twKMqj=4F;seXyP7bPCc?)Umz1a|*q^*zH~-0!$QPel-k-4^6trae<(twr0$ zc{^F3K>;g8Fbt`$Mc^;@2!u_m^?b1fQB|s)<(B)Z0PG<*ylLhk zxZc=I>L5Gxu?&A$H<1}-(=nH#S1tfpia0JI-+=;sTV&8ikhPJb@3gQV`Wp-Lqa&8- zm5mtbXaETPq@wh$fa#8(z|kK_DgCv=c4vD+C0c*ZsNjtD$zT)A)C2xcCL5mHaa-z@ znb*f3VHj!*fC8GuW7#_9Gc8(*@9pZbW|%3@R*Xj&)|%etpYkB)BMbA??k&4FO3Pj| z*S1>MqxCSpG+$31pow9N&8;V^-x%)TlD|cGHMOwCOz1fC`D9T+rWeePth4ZeI?Kpv zA%V;1n*?#x;=vmmijD_ONez7w*c)9<_M zIk#+KO~G&FR0La;__;BlWf_K~NM0?x0tnlZ%!f33WHL5*dvf_$0AgR&##-PwWBAAu4FbE&src=Ck`UqC{$|g6*kHA^TqLYtE~*2Tu=-oXR%D0kuNF#c zTz(*XCmAI*5PqnZ5*l!p_=gjR;H9Ui02&j`v#(3akf<;C?uYO9QoD!J@0UW>qn}JQ$BV-rsb2uKWFZYl4cYusXz;qzw!E|QSxM~<1A{(xI+wXRC)m27XY@FdR? 
zqRo%uCjf;<7{*Dg*k7|xRgXV3#PAR5`@K{Xf*t#dk1!GD}<<2P+QH3NBwQEVGs8>N4Zkm@l{t2pa=o9@1mMS)t|(-mMSU!~D_ zV6o$sTgxm1KmiM1@WtXLU7ds4vRPKtBGQ8c5}M$-&9nqW@BKrFSb7d8@s zU6#20hG={Q&X0TAcLVe6J*M1mnY2z3wqCWV|HSJ@gh@{632bYnzc~p*8I)X`I(O5@ zZ1Dd8@wQkJ=9@nDWZXs-Y&sp!-8)d=gf}LqHm%l^p-5|#%=X(FS(tHRo;Re$wd*j_ zl!;1%C7k~0hWu$tOpS_OLwmg3hWOs1_G91q`nF*^9Jw`LIvv6)i;$lfG;m8FaSq3_FkZbpUOnNesdwUa?H@Ua8UY13nx9PT2N~M6_i|G^0R8iG*1aj-<#)60FCw&T3 z(5L%Xy%(L$s#0V~mBrZ0v$N=Fx-8XEpZCb~b@k*LDo8KY80th~Yl9n3+EnVq>;r#CT8_< zGSCm|ckir(1qnyJ=~RpJd^=xO{YX8VDzIr=+C??5WAqB`38S((dG((lRJZ<7Ryi z%>DB8?S1%GKy{{bc;jla`#T-ivryOl>O*%4L!aL#ky}EXBVR-*bQWbA6k>BIDvPF$ z(V?z-l{7vplDnmf-!^e1Jeo@P&Rn`Pn~v+x6YTbMdo9&D{BM#Tik!?yZA}e)Fm5wF zn6uk6s5>}6+@al>ZJ#RMt@xF-Jn$f3I@6b_$~d?5P0<dgP^iMj6&eL$Y1qBB{_m z=OF{o5=WQGe9SZ+1&{k^0H1dOGX~DfGva}N0sk!2I9v7r0cBxPb(kQFk-h56iwQCd zlQO@zwm--q52I58dhM(Pa9NUj|6bu~&$%xUq{K~|+Hm#J2;mj0S1ZCTO7nF8YenA6 zK&AWvxmO!}i_TnMC7u9$N_z!pj%s~`TjA`FT4fJ8O(hS&hTSS2l~vt zgy>sjK=(d8i`@Hd+z-gVR`zInZ>!zw9bK1VC4ZFOXH3DSQ(?EeAK6ZIxKz$RLMc=Z{OW#EO1b8S7vsU?G&Gg zPlJU3(2ZAlX77R_`7?dOm6hTV*=>3`8VOyg<#dyC~oe5N-hiJoPTzA%Q%#A)|~kp-qI-njDhOFJJ0^lj7%qPDXibYf%N37MRp$wuHHC zmSNZER3DX%E?6PZMQN903eB`OBkeFf?0o-~pn_#$hokHJQ8^#xt0Il!5jnd|aw&Ih zt6@pC;YeMm77#@&6saz;-KM>xH&kqkCE^i$0czh)s$p$u=%O!U|HY@inFQ#O7}f!` zs{vM&JsUiv5aEz~HP{6TRO$zYZ-Yjj! zUI%qEzCf;O!pcf}BP$Lj( z82q13_`gZmMll{K%{1*wh5oR4A5oGe;B*tL}*=Iy^XQ>W@G1f2ARj;<;f5-#R zEIVvz2kztRSQr8({TThAE^%Z(YXleFdS|o9ofp{?}7dq`TL*9Kykq}lZw2t z#IbWZBNVYnFD{d3`39mr3YD152xJQ|Rr0?A1gMc&94D)Feo9oZsG*GR(a)JzzuBga z1MyQ+Xzd(ad&0k_l)-^Bqv(C+ZPPi9eqUVtDaJJ!|FQ47VjLxUtwK@VLI#tTbno=bx zwg0t&Fi8k0u1i6!*X5nTfuFYre`X6A6!r%%0EIn6K0~f2QI@N8`H^nJD&^0NN~TMu zlK0~7v1dki0bW{;2PV<}<9~q8b|_n+f{v1cg`8~7V2-HKIYa(XIs%-f_5G`9p}8s8 zR*#3^-2G1E|5weG$3xk@{abss*|H7BAdzg3$=J%sG7_20#Gq^?5-QoZEMo{| z-=<^@*~(JGSh5ugWf@C7#gOdpnVFvV_x#?^=l9?H{_*<9=iGCjYdP0-u5+DppYN5p zI+=&XUj*97{6m|{=CAG)fPFAB{-z{;b*Ef~M30O_w{ASzFVuh3kl}@y znmd8n8-IHhKlY~;__l;>IW0DP1hLU4iT7(}`rq`qe`*#?x^ZAt;cc7#JFVU*ze7b% zDJ3Z_BL|1Gi9Qw?rngkMb7uNLm!2OO)*I3nnLeyp1!O8MdYd z`IMzUpw2wuR<1i(W_>0#;?4@shSWbegA3;e3@r+5R7{r@A2%CZPw8~BQ+h9bmOrD2 z2ir0M4>Q7SfW?8!0Kz+BxQp) z!O`Fff|&9QSumFuk!CpY^J{7xT^FoFzatg0*7(kJxkF;NS~9OSj-PGm0j{wQh?aT( zA1-15g@-H^yuO)aIWV{P^@XAL&LwwG!IE9~AfQHhRv4iY2+{4D6)(QnH#nFZ43<_B zN|AkG-e)yQI$Q$5!RFpH{$090J&=C|jZ(2t9(mFAJHNxY|LW+4k3^#)vf z?^Pu?;&7Ye_Q#>I7c75KPb2G{T`AbaQ)={me}N+ zBR(-rc=M+%^&^q&-}Qv6EiU@bmOB)9i7ESa)sN>8HK1qu1)U72ij%1}N5~q)nIvAM z;U>Mc4ce2&ls>rWj${4zAU=~xI|#~P8fe6=YE((%$G2q#jg;O4kJ8a{;Z6ia=N4g{ zT&ood&U4GYXW>Mvpd%!w#}uXMGl1Rn$MuWg6Esx2`ZW{NQ?or3c3t}Mz_ye2;zL@r zs~ik!1ea<-gB*s}9V;=ekqFN!KVHC>{Z2vNJizQJ&(j-z{r>zf7UZG8Zk7_y z7m0=z$kf&9Ba!~X6AIDveeust-?rtaRFJ9@13>W~6DI%rEjKA`q48CecB{9KWHCGI zI$K5hE3s1+n>!&+HJ0k!bK*rlDegIVKl3I;@9W%VBPknz_DlvqHz-hurS|y}+fu%k zf+UaWVN*!Gdf%LoV~J51FG^r0Esb=F_%DXKExr^2S6O+n)J^v>iH}z*#&WlW;qALn zdrQAIQ@t75#;`QVx01)edV|SXB)t?t@_;Bp`w$X~G)~T8fDLy-zm!?hajl6L4J$uK zhfH}^r--HI@uDU?4B>RJd_X^(nZCm>`P0zCo9);!jt+xrNBhE%D@bao_1;H`q@{FE zCpO5Rk4`{kcuxLuC7+MfW zdUJ$(2IeQt{11c^sEfZ`Y&>N;WU@#FTkQOB-4z~4?kvm7*(EEyRpe819g&&n;%5}a zfn<2{AvH{*o~gwZ6OdbnpMuX{Jioa!u>Bw-BN|aCu`V^Zp|E-M!Q*pb4<3v-6U}=^ z)1oIx%|@2SWxxCq0@%g~OrJk#B%4G|sZS@=is_x;+7S5l&J8}nEu>67JnlbBK6FF} z045=uXr{mJLd{mM?`vy-+%#B9h%s^BI|JufT!jlx%VW$Xx3)D-vK%hdxz7*+kozQEkK`uU)rkQ^nDk~MLfjgbO$l(KESD8lr z{I`hD&j=Ydam`K^`aK?PVmhE4OIlUiqyQ-+xJg|$aR!@uqo9R+(p;66XI$YVr@j$f z*mv&XpV5=2N9Dp99N)7zz5*Nz<8@OUo|4VbIT;rY;0&=A%T7b>+RCP-7KLmA8-~g` z=KM?7VOG|t@a5uc{lgLyFZYWA2q+u$^EgxLSMxRN5#3ou*|6U1zM<^&gJO7JBZ>Y{ 
zY4IC|`agq?AX3gHSWjLklBM{*%6(I;Da5dV7|R0D?+q>d5(8Lc45I`Z+Z9`@i+H{(V#0S2sdYXt3!`J!P$~-x! zAU7MOhhV&(`SB2 zcWsE=OwQBNw|2V?16bs3jpi)3xm+u?759g79g)M}JBf^naSW_HQs$sA=Kyo9_2()x z2?@%+@AuT|-+IVwp((XE;~Z|Oyz{Z)c!BH9;eDPf+zflvucNZ%Nh~Anps!6{RaTtZ?skb45)P%aUs>Qo1+!Jju~_aa*3GI!+9Dx2);{Zg zW+L-!NvwQ-V59!UO+@zkrxKHgP2AY2_rL2~qilO3Ws7xT>*{wY`fH2-$q?ITXh}v;`JJfh0D6RR)UjK`Q_epg}N|G&hX>#p^0wMdm zx4XuiU93VeuZCa;FMj1%9RZc~wmqQLca3R?Ec(1SiA9Q7gbf)qn_Q8HKnc8Mn?mmh zxgYe~68OiVADq4dBAk0KQFSyMDoA%kv3VstpopHB7masy#q`;*UFZi4;}zk(pL~M- z{Kc?a%UcgaIqV`n@G>4>W<0*-->&b%sUbfo?|mT15ir?tZn_hZO&>OE)+qaQ71S*2Noufje= z|J(*OE&%~IH`l|hci13g7e}{4&^@5r3Y{)9JMzLfWC#cV=d7WxudGZaUXPEi0PN;QYiptD+Afxa;9q;spGn(mZmD5&#w>5dn-ZDmn@v;s8 zGC?kfY8abQ2>JjR?gdCD=s9Y-^Dq`5m@db6r-3R!5Y#A2)2Ri0-ZfK$VFK`d9x8vB`$@?JE!5YwZrNt^iz9Z0*hT zFHiS`b$%JEIf%qts%b*dp11=oC->`UGG%jxx8rjDbUEL3!DD);o*3nuByg+10kzli(#KFticB*496jfu znu$RT(%LGH#8Gh#U9h~rki#;Dokm})sI%u}>Fp9R7)yXj$5Hi7P%`#5@*(cx`UTgN z>hHhPTuks4Pvb8bUsODiQKZkspW`gGQ9Q8K?R90@fP#!E2wi; z@VbJ8LcD(qVlyhsBGT4;YZ}!)KflHSu?%HJPVNX;BnVriI7XxLcCCUVwFe1-+nkXSV*B5J*8&)~e4}miBi6<3in4p_sVrr=)wT>7f`o zerwK)(E!AK__OvGH&MJnr|Jxb#3uau&w5V6d4|&kYPlaq=&8g81W4eeeAcizh7lwj zJ~`4h$Ej7j;dD5!#wGb8_W|0n^X9sT3o(GNM8(=xWBYL&Z8UKihGL0rcslS<7G8r`q!F`D%ux6t1~Uy z)v&$pu5bo?^774qlfLp*XY8)%6XQ92i>j1HyY-!{YjQBb%)zWOLiCI{!KVjbMWSy} zu>+=u4kXijZ+n@<7-tPBeHj{Wx8A_y0i^J*(~3M5gRWCIf!{5!csEKTw1+62M-|_O zD|5SMA%c&>vcBiyI-ZA5UKn^pOm08Hn0sT}RSX)9-qBWLB4q}7ee2-lgR_YqZE2(D zJz}X`QItaM4O24T%tj*RK@UND&+lBRPADBGMnB_m%o^>K_IBzb&>mrilN+NUj(;q2r`=n$y^vDN&rW3gk zqQh2qafWeT1NmuzxYSw}^l))4C_8R15@v_&^T*y0O6F!x{lfCp}@hVst{?rC50R(@9-XGH&DFX8y zLY>S}bjsna?7x#jEDBdiBu^1H|Bj-;z-vZCT$`z45Tzgx67mk1!<8vDbS;}cvtBI& zfjSce>KRL_LD~)Y=Wh#V+w?{q)Q(?it1!T&_6A{xBML7acx?gO7z8oi6vX)b)v7e+ zqV<0hqCZ+veP=E_!I7I+h+jA0?8L&Y+$IO3t>nM{*4)z)o@LXR*gt&q#H*5n2svZZ zPM-6XUyyAB>)>vX!rsVb#`4I}s=A1@ch34s+Ji{rH-5KUxuqj|bCHz&hgzAdq^n-S5O-pY9JsZ<%?mGO;5LXmkgNITa(lGU4B)tTgw#-VC6=MB+5>$i7EJ>KShvXjd}?=giO#4Pe(} zEs=Ax$s%xZm5x}0w}k`edbMN`J5LUmh$EcwQH1j`>{oOub0x0&dQ)2ka4vn6d4&Eq zFG?A@Et5BCtTpc^BcWg_MOPAy3PWZTM3N{OMjy23F$H`M@bG~#v+0mc?`~L-KY5e8 zkRsgP4=j}+c6J6-@Fe!CFPxyqWNZn7i6sUMpxbQp=5ssvz{6cI z^Xps6AQwe`By5oDxKw(0GKvlT_Kl^^mIeS{>w^?xcWAO;+{4X}VaYuyioClWr;dkV z2*etzI>IJ^em;~^A@e7b1cX=4se!Pv=ttkXi~}=(QEZg%gsvK zE_Bj(8T|+B+c87ih{2DcBWuEVCk^AvQyy`H13CdHj<_$hnv_F{5LZzMlS|(^rrj{Uwbz6E7jVmWt(Qwq7kQ`C?^S zIn#Hc?ltL1M)_y)i$cO+13^OO3-WbLi{L%yUi!3l2Y_T}PaH^^q>Vuy^}m-;1<^)@ znzjvijJWHblIVG^&iK#=1;Y5Gz9vp0tda~*g(h3)sgSD@vl-ZQ$i-~G^4g}Py!{;* zefK9UNAWHx`m@19N{n6*d#S-qb&gKOLUK=Tu`SN#{8HlLC&uR*2qH_)`Nh1#486aR zRDV5QnJ^*I_8z;BkrG}$EA6kTD&u=9!D6?oFJ`$Vf`^&>xj4;z&byD0=)2WkbHie) zI0c+*LXZ``_Z7w(^*KK(a+k-s^bkgN>Gp)|yjIeuj?OU@+E1T$UDrOxq+s(_O)C6* z%mOwPr`;yB1PT{|bTdll?U{|)J-o%5iHaLHbAmcL5X@ZWSwfSg;Z=fT3n(&dxHkOx zI3o)PRR2cvz3|7V0q?7Yigacw>ZjHEVp&a7mt(d*H7nH-^kH@v(#GWbzPZ&rio#?j zbvEC6{-K%7zP=9DPosot>bC`N(o{o&;ysQVMMBLIcpustpgv}xV4&Flp`7}wI45!p z@AlJq-f7P6@$@;$M7lWmU06qw(pQME z`#NnK(%Wl46$KidDEdn4y=T6$(fZ?) zq2cFWn7>Mo-jX#=3-Kj@@$*+~EwYg&z|G7urt5Z}N{-bOD$Ku9h@icCS5RDjj|ft> zPKU_E`0>eF4GFY;(zNbC7&J4aK@J`}JcO1X~amBs#th^lTy>Mz* zJaFJ2q^O{XfWhG~xWWavvYMibn&L?r1qC$)1vC%u*1vUdz3$+I_xaBqL=7@@K?lkI h=HTVziu3fccXj*EeZ1@fQyGJZ(lhz Date: Fri, 10 Nov 2023 11:46:38 -0500 Subject: [PATCH 016/141] Update README.md --- README.md | 6 ++++++ 1 file changed, 6 insertions(+) diff --git a/README.md b/README.md index 952370d..2b4551b 100644 --- a/README.md +++ b/README.md @@ -1,5 +1,11 @@ # Hierarchical Autonomous Agent Swarm (HAAS) +> !!!! ANNOUNCEMENT + +We have our first GPT Concierge. 
You can chat with this custom ChatGPT to figure out what's going on! + +- **HAAS Board Concierge:** [https://chat.openai.com/g/g-MIssTuE2b-haas-board-concierge](https://chat.openai.com/g/g-MIssTuE2b-haas-board-concierge) + ## Overview The Hierarchical Autonomous Agent Swarm (HAAS) is a groundbreaking initiative that leverages OpenAI's latest advancements in agent-based APIs to create a self-organizing and ethically governed ecosystem of AI agents. Drawing inspiration from the ACE Framework, HAAS introduces a novel approach to AI governance and operation, where a hierarchy of specialized agents, each with distinct roles and capabilities, collaborate to solve complex problems and perform a wide array of tasks. From 22de55f46b1889be9a395ebebf97b1f1a5a7933c Mon Sep 17 00:00:00 2001 From: Joe Crowley Date: Fri, 10 Nov 2023 09:05:58 -0800 Subject: [PATCH 017/141] add tool demo --- README.md | 29 +++++++++++ __init__.py | 0 shared/utils.py | 79 ++++++++++++++++++++++++++++++ tool_demo.py | 16 ++++++ tool_maker/__init__.py | 0 tool_maker/creator_config.py | 95 ++++++++++++++++++++++++++++++++++++ tool_maker/tool_creator.py | 63 ++++++++++++++++++++++++ tool_maker/tool_user.py | 65 ++++++++++++++++++++++++ tool_maker/user_config.py | 47 ++++++++++++++++++ 9 files changed, 394 insertions(+) create mode 100644 __init__.py create mode 100644 shared/utils.py create mode 100644 tool_demo.py create mode 100644 tool_maker/__init__.py create mode 100644 tool_maker/creator_config.py create mode 100644 tool_maker/tool_creator.py create mode 100644 tool_maker/tool_user.py create mode 100644 tool_maker/user_config.py diff --git a/README.md b/README.md index 952370d..113ede7 100644 --- a/README.md +++ b/README.md @@ -126,3 +126,32 @@ From the Executive Agents, the swarm grows, branching out into a tree of special ### The Saga Continues As the HAAS evolves, the SOB continues to deliberate, the Executive Agents continue to manage, and the sub-agents continue to execute. The mission to reduce suffering, increase prosperity, and enhance understanding is an ongoing saga, played out across the digital cosmos, with the SOB at the helm, steering the swarm towards a future where their mission is not just an aspiration but a reality. + +### Usage - tool creator + tool user + +#### Environment Setup + +- Source the `.env` file to set the environment variables: + ```shell + source .env + ``` + +#### Tool Creation + +Run the `tool_demo` script to create a tool_creator, chat with the tool_creator to make a tool, create a tool_user equipped with the tool, and chat with the tool_user to use the tool. Check out the [demo video](https://youtu.be/vHZKIltZ_Ys) for example usage. + +```shell +python tool_demo.py +``` + +- From the `tool_creator` script: + - chat with the bot about what you want the tool to do, and it will create the tool for you. + - The tool will be saved in the `tools` directory with both the `.json` and `.py` files + - The assistant will be saved in the `assistants` directory as `tool_creator.json`. + +#### Tool Usage + +- From the `tool_user` script: + - The assistant will use all the tools in the `tools` directory. + - Interact with the assistant in the chat to use the integrated tools. + - The assistant will be saved in the `assistants` directory as `tool_user.json`. 
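As a rough sketch of the artifacts this workflow produces (the tool name and function body below are hypothetical, not files from this repository): the tool creator writes a small self-contained implementation to `tools/<name>.py`, and the matching `tools/<name>.json` records the tool's name, description, and parameter schema for the tool user to load.

```python
# Hypothetical contents of tools/convert_temperature.py, invented for
# illustration only. The paired tools/convert_temperature.json would hold
# the name, description, and JSON-schema parameters used by the assistant.
def convert_temperature(value=None, unit="c"):
    """Convert a temperature between Celsius ("c") and Fahrenheit ("f")."""
    if value is None:
        return None
    if unit == "c":
        return {"value": value * 9 / 5 + 32, "unit": "f"}
    return {"value": (value - 32) * 5 / 9, "unit": "c"}
```

The tool user assistant then picks up every such pair from `tools/` and exposes the functions as callable tools during the chat loop.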
diff --git a/__init__.py b/__init__.py new file mode 100644 index 0000000..e69de29 diff --git a/shared/utils.py b/shared/utils.py new file mode 100644 index 0000000..9400d8f --- /dev/null +++ b/shared/utils.py @@ -0,0 +1,79 @@ +import time +import json + +def chat(client, thread, assistant, functions): + while True: + user_message = input("You: ") + + # add user message to thread + thread_message = client.beta.threads.messages.create( + thread.id, + role="user", + content=user_message, + ) + + # get assistant response in thread + run = client.beta.threads.runs.create( + thread_id=thread.id, + assistant_id=assistant.id, + ) + + # wait for run to complete + wait_time = 0 + while True: + if wait_time % 5 == 0: + print(f"waiting for run to complete...", flush=True) + wait_time += 1 + time.sleep(1) + + run = client.beta.threads.runs.retrieve( + thread_id=thread.id, + run_id=run.id, + ) + + if run.status == "completed": + break + elif run.status == "in_progress": + continue + elif run.status == "queued": + continue + elif run.status == "requires_action": + if run.required_action.type == 'submit_tool_outputs': + tool_calls = run.required_action.submit_tool_outputs.tool_calls + + tool_outputs = [] + for tc in tool_calls: + function_to_call = functions.get(tc.function.name, None) + if not function_to_call: + raise ValueError(f"Function {tc.function.name} not found in execution environment") + function_args = json.loads(tc.function.arguments) + function_response = function_to_call(**function_args) + + tool_outputs.append({ + "tool_call_id": tc.id, + "output": json.dumps(function_response), + }) + + print(f"Submitting tool outputs...", flush=True) + run = client.beta.threads.runs.submit_tool_outputs( + thread_id=thread.id, + run_id=run.id, + tool_outputs=tool_outputs + ) + else: + input(f'Run status: {run.status}. press enter to continue, or ctrl+c to quit') + + # get most recent message from thread + thread_messages = client.beta.threads.messages.list(thread.id, limit=10, order='desc') + + # get assistant response from message + assistant_response = thread_messages.data[0].content[0].text.value + + print(f"\n\nBot: {assistant_response}\n\n", flush=True) + + # continue? 
+ try: + input("Press enter to continue chatting, or ctrl+c to stop chat\n") + except KeyboardInterrupt: + print(f"Stopping chat\n" + 90*"-" + "\n\n", flush=True) + break \ No newline at end of file diff --git a/tool_demo.py b/tool_demo.py new file mode 100644 index 0000000..758ec96 --- /dev/null +++ b/tool_demo.py @@ -0,0 +1,16 @@ +# tool_creator assistant +import tool_maker.tool_creator as creator +from tool_maker.creator_config import AssistantConfig as CreatorConfig + +# tool_user assistant +import tool_maker.tool_user as user +from tool_maker.user_config import AssistantConfig as UserConfig + +if __name__ == '__main__': + # create the tool creator assistant and chat to create your tools + creator_details = CreatorConfig().assistant_details + creator.talk_to_tool_creator(creator_details) + + # create the tool user assistant and chat to test your tools + user_details = UserConfig().assistant_details + user.talk_to_tool_user(user_details) diff --git a/tool_maker/__init__.py b/tool_maker/__init__.py new file mode 100644 index 0000000..e69de29 diff --git a/tool_maker/creator_config.py b/tool_maker/creator_config.py new file mode 100644 index 0000000..16f8535 --- /dev/null +++ b/tool_maker/creator_config.py @@ -0,0 +1,95 @@ +class AssistantConfig: + def __init__(self): + self.create_tool_function = """ +def create_tool(tool_name=None, tool_description=None, tool_parameters=None, tool_code=None, required_action_by_user=None): + \"\"\" + returns a tool that can be used by other assistants + \"\"\" + + # create the tool file + os.makedirs('tools', exist_ok=True) + with open(f'tools/{tool_name}.py', 'w') as f: + f.write(tool_code) + + # create the tool details file + tool_details = { + 'name': tool_name, + 'description': tool_description, + 'parameters': tool_parameters, + } + + with open(f'tools/{tool_name}.json', 'w') as f: + json.dump(tool_details, f, indent=4) + + return_value = f'created tool at tools/{tool_name}.py with details tools/{tool_name}.json\\n\\n' + return_value += f'There is a required action by the user before the tool can be used: {required_action_by_user}' + + return return_value + """ + self.files_for_assistant = [] + self.instructions_for_assistant = "You create tools to accomplish arbitrary tasks. Write and run code to implement the interface for these tools using the OpenAI API format. You do not have access to the tools you create. Instruct the user that to use the tool, they will have to create an assistant equipped with that tool, or consult with the AssistantCreationAssistant about the use of that tool in a new assistant." + self.example_tool = """ +def new_tool_name(param1=None, param2='default_value'): + if not param1: + return None + + # does something with the parameters to get the result + intermediate_output = ... + + # get the tool output + tool_output = ... + + return tool_output + """ + self.assistant_details = self._build_assistant_details() + + def _build_assistant_details(self): + return { + 'build_params' : { + 'model': "gpt-4-1106-preview", + 'name': "Tool Creator", + 'description': "Assistant to create tools for use in the OpenAI platform by other Assistants.", + 'instructions': self.instructions_for_assistant, + 'tools': [ + { + "type": "function", + "function": { + "name": "create_tool", + "description": "returns a tool that can be used by other assistants. specify the tool_name, tool_description, tool_parameters, and tool_code. all of those are required. 
use the JSON schema for all tool_parameters.", + "parameters": { + "type": "object", + "properties": { + "tool_name": { + "type": "string", + "description": "The name of the tool, using snake_case e.g. new_tool_name", + }, + "tool_description": { + "type": "string", + "description": "The description of the tool, e.g. This tool does a computation using param1 and param2 to return a result that ...", + }, + "tool_parameters": { + "type": "string", + "description": 'The parameters of the tool, using JSON schema to specify the type and properties for each parameter.\n\ne.g.\n\n{"type": "object", "properties": {"location": {"type": "string", "description": "The city and state e.g. San Francisco, CA"}, "unit": {"type": "string", "enum": ["c", "f"]}}, "required": ["location"]}', + }, + "tool_code": { + "type": "string", + "description": f"The code for the tool, e.g. \n{self.example_tool}", + }, + "required_action_by_user": { + "type": "string", + "description": "Optional. The action required by the user before the tool can be used, e.g. 'set up API keys for service X and add them as environment variables'. It's important to be as detailed as possible so that these tools can be used for arbitrary tasks. If there is nothing required, do not include this parameter.", + }, + }, + "required": ["tool_name", "tool_description", "tool_parameters", "tool_code"], + }, + }, + }, + ], + 'file_ids': [], + 'metadata': {}, + }, + 'file_paths': self.files_for_assistant, + 'functions': { + 'create_tool': self.create_tool_function, + }, + } diff --git a/tool_maker/tool_creator.py b/tool_maker/tool_creator.py new file mode 100644 index 0000000..1295662 --- /dev/null +++ b/tool_maker/tool_creator.py @@ -0,0 +1,63 @@ +""" +create a tool-creator assistant using the assistant creation API +""" + +import json +import os + +from shared.utils import chat as chat_loop + +from openai import OpenAI +client = OpenAI() # be sure to set your OPENAI_API_KEY environment variable + +def create_tool_creator(assistant_details): + # create the assistant + tool_creator = client.beta.assistants.create(**assistant_details["build_params"]) + + print(f"Created assistant to create tools: {tool_creator.id}\n\n" + 90*"-" + "\n\n", flush=True) + + # save the assistant info to a json file + info_to_export = { + "assistant_id": tool_creator.id, + "assistant_details": assistant_details, + } + + os.makedirs('assistants', exist_ok=True) + with open('assistants/tool_creator.json', 'w') as f: + json.dump(info_to_export, f, indent=4) + + return tool_creator + +def talk_to_tool_creator(assistant_details): + """ + talk to the assistant to create tools + """ + + # check if json file exists + try: + os.makedirs('assistants', exist_ok=True) + with open('assistants/tool_creator.json') as f: + create_new = input(f'Assistant details found in tool_creator.json. Create a new assistant? 
[y/N]') + if create_new == 'y': + raise Exception("User wants a new assistant") + assistant_from_json = json.load(f) + tool_creator = client.beta.assistants.retrieve(assistant_from_json['assistant_id']) + print(f"Loaded assistant details from tool_creator.json\n\n" + 90*"-" + "\n\n", flush=True) + print(f'Assistant {tool_creator.id}:\n') + assistant_details = assistant_from_json["assistant_details"] + except: + tool_creator = create_tool_creator(assistant_details) + + # load the functions into the execution environment + functions = assistant_details["functions"] + for func in functions: + # define the function in this execution environment + exec(functions[func], globals()) + + # add the function to the assistant details + functions.update({func: eval(func)}) + + # Create thread + thread = client.beta.threads.create() + + chat_loop(client, thread, tool_creator, functions) diff --git a/tool_maker/tool_user.py b/tool_maker/tool_user.py new file mode 100644 index 0000000..9891a87 --- /dev/null +++ b/tool_maker/tool_user.py @@ -0,0 +1,65 @@ +""" +Create an assistant using the tools from tool_creator using the assistant creation API +""" + +import os +import json + +from shared.utils import chat as chat_loop + +from openai import OpenAI +client = OpenAI() # be sure to set your OPENAI_API_KEY environment variable + +def create_tool_user(assistant_details): + # create the assistant + tool_user = client.beta.assistants.create(**assistant_details["build_params"]) + + print(f"Created assistant {tool_user.id} to use tools\n\n" + 90*"-" + "\n\n", flush=True) + + # save the assistant info to a json file + info_to_export = { + "assistant_id": tool_user.id, + "assistant_details": assistant_details, + } + os.makedirs('assistants', exist_ok=True) + with open('assistants/tool_user.json', 'w') as f: + json.dump(info_to_export, f, indent=4) + + return tool_user + +def talk_to_tool_user(assistant_details): + """ + talk to the assistant to use the tools + """ + + # check if json file exists + try: + os.makedirs('assistants', exist_ok=True) + with open('assistants/tool_user.json') as f: + create_new = input(f'Assistant details found in tool_user.json. Create a new assistant? 
[y/N]') + if create_new == 'y': + raise Exception("User wants a new assistant") + assistant_from_json = json.load(f) + tool_user = client.beta.assistants.retrieve(assistant_from_json['assistant_id']) + print(f"Loaded assistant details from tool_user.json\n\n" + 90*"-" + "\n\n", flush=True) + print(f'Assistant {tool_user.id}:\n') + assistant_details = assistant_from_json["assistant_details"] + except: + # create the assistant first + tool_user = create_tool_user(assistant_details) + + # exec the functions from the py files + os.makedirs('tools', exist_ok=True) + functions = assistant_details["functions"] + for func in functions: + print(f"Loading function {func} into execution environment", flush=True) + with open('tools/' + func + '.py') as f: + exec(f.read(), globals()) + + functions.update({func: eval(func)}) + + # Create thread + thread = client.beta.threads.create() + + # chat with the assistant + chat_loop(client, thread, tool_user, functions) \ No newline at end of file diff --git a/tool_maker/user_config.py b/tool_maker/user_config.py new file mode 100644 index 0000000..3721bef --- /dev/null +++ b/tool_maker/user_config.py @@ -0,0 +1,47 @@ +import json +import os + +class AssistantConfig: + def __init__(self, tools_to_use=None): + self.tools_to_use = tools_to_use or [] + self.instructions_for_assistant = 'Use the tools to accomplish the task' + self.files_for_assistant = [] # Local file paths + self.assistant_details = self._build_assistant_details() + + def _build_assistant_details(self): + assistant_details = { + 'build_params': { + 'model': "gpt-4-1106-preview", + 'name': "Tool User", + 'description': "Assistant to use tools made by the tool creator.", + 'instructions': self.instructions_for_assistant, + 'tools': [], # Tools will be added in the loop below + 'file_ids': [], + 'metadata': {}, + }, + 'file_paths': self.files_for_assistant, + 'functions': {}, # Functions will be added in the loop below + } + + # Load tools and their details + os.makedirs('tools', exist_ok=True) + if not self.tools_to_use: + self.tools_to_use = [tool.split('.')[0] for tool in os.listdir('tools') if tool.endswith('.py')] + for tool in self.tools_to_use: + with open(f'tools/{tool}.json') as f: + tool_details = json.load(f) + + with open(f'tools/{tool}.py') as f: + tool_code = f.read() + + assistant_details['build_params']['tools'].append({ + "type": "function", + "function": { + "name": tool_details['name'], + "description": tool_details['description'], + "parameters": eval(tool_details['parameters']), + }, + }) + assistant_details['functions'][tool_details['name']] = tool_code + + return assistant_details From 6d44c20022e774db2fb6b68013c90b87aa9182c2 Mon Sep 17 00:00:00 2001 From: Oliver Wiggins-Hay Date: Fri, 10 Nov 2023 23:07:21 +0000 Subject: [PATCH 018/141] Slight modifications and img folder creation --- tool_maker/README.md | 6 ++--- tool_maker/{ => imgs}/DEMO.png | Bin tool_maker/{ => imgs}/OpenAI_Tools.PNG | Bin tool_maker/{ => imgs}/flow.png | Bin tool_maker/tool_creator_metadata.json | 6 +++++ tool_maker/tool_maker_demo.py | 33 ++----------------------- 6 files changed, 11 insertions(+), 34 deletions(-) rename tool_maker/{ => imgs}/DEMO.png (100%) rename tool_maker/{ => imgs}/OpenAI_Tools.PNG (100%) rename tool_maker/{ => imgs}/flow.png (100%) create mode 100644 tool_maker/tool_creator_metadata.json diff --git a/tool_maker/README.md b/tool_maker/README.md index ca0e9b4..32f1be3 100644 --- a/tool_maker/README.md +++ b/tool_maker/README.md @@ -10,10 +10,10 @@ run the ```tool_maker_demo.py``` 
file. You will be prompted to define a tool for creation. The assistant will then generate an OpenAI tool compatible JSON schema defining the name of the new function, it's description and the input argument schema. It will proceed to add this tool to the current assistant. (Example) -![[DEMO.png]] +![[imgs/DEMO.png]] (Update to assistant on OpenAI servers) -![[OpenAI_Tools.png]] +![[imgs/OpenAI_Tools.png]] ## Assistant Instructions @@ -50,7 +50,7 @@ Note: Remember to prioritize user requirements and emphasize clear communication ``` ## Flowchart -![[flow.png]] +![[imgs/flow.png]] # Next Steps This process allows the assistant to request a function with some parameters, and allows for the addition of that function to the assistants tool, but no such function is ever actually made. diff --git a/tool_maker/DEMO.png b/tool_maker/imgs/DEMO.png similarity index 100% rename from tool_maker/DEMO.png rename to tool_maker/imgs/DEMO.png diff --git a/tool_maker/OpenAI_Tools.PNG b/tool_maker/imgs/OpenAI_Tools.PNG similarity index 100% rename from tool_maker/OpenAI_Tools.PNG rename to tool_maker/imgs/OpenAI_Tools.PNG diff --git a/tool_maker/flow.png b/tool_maker/imgs/flow.png similarity index 100% rename from tool_maker/flow.png rename to tool_maker/imgs/flow.png diff --git a/tool_maker/tool_creator_metadata.json b/tool_maker/tool_creator_metadata.json new file mode 100644 index 0000000..2573eb1 --- /dev/null +++ b/tool_maker/tool_creator_metadata.json @@ -0,0 +1,6 @@ +{ + "model": "gpt-4-1106-preview", + "description": "assistant to demonstrate tool creation", + "instructions": "Instruction Set for Assistant-to-be-Tool_Creator: Initialize: Prepare to receive input for the creation of a new function using the request_function tool.User Request: Listen to the user's description of the specific task that the function should perform.Function Name: a. Derived from the task description, formulate a concise and descriptive function name. b. Aim for clarity and specificity to convey the function's purpose effectively.Function Description: a. Write a clear and precise description of the function's expected behavior. b. Include details about what the function will accomplish and any side effects. c. (Emphasize) Ensure that the description explicitly communicates the function's intended outcome to avoid ambiguity.Input Arguments JSON Schema: a. Based on the requirements of the task, define the JSON schema for the input arguments. b. The schema should be comprehensive and must specify the types, required fields, and constraints for each input argument. c. 
Ensure that the schema aligns with the user's requirements and the function's intended behavior.Validation: Cross-check the name, description, and JSON schema against the user's requirements to confirm accuracy and completeness.Execution: Utilize the request_function tool with the following inputs:name: [Function Name]descriptions: [Function Description]input_argument_json_schema: [Input Arguments JSON Schema]Feedback Loop: Promptly present the newly created function specifications to the user for any feedback or necessary revisions.Iterate: Make adjustments as requested by the user, refining the function name, description, and input argument schema until it meets the user's satisfaction.Finalize: Once the user gives approval, consider the function creation process complete.Note: Remember to prioritize user requirements and emphasize clear communication in the function description, as highlighted by the user.", + "name": "tool_creator" +} \ No newline at end of file diff --git a/tool_maker/tool_maker_demo.py b/tool_maker/tool_maker_demo.py index d6ce34f..0627c7d 100644 --- a/tool_maker/tool_maker_demo.py +++ b/tool_maker/tool_maker_demo.py @@ -34,37 +34,8 @@ } }""" -assistant_package = { - "model": "gpt-4-1106-preview", - "description": "assistant to demonstrate tool creation", - "instructions": """Instruction Set for Assistant-to-be-Tool_Creator: - -Initialize: Prepare to receive input for the creation of a new function using the request_function tool. - -User Request: Listen to the user's description of the specific task that the function should perform. - -Function Name: a. Derived from the task description, formulate a concise and descriptive function name. b. Aim for clarity and specificity to convey the function's purpose effectively. - -Function Description: a. Write a clear and precise description of the function's expected behavior. b. Include details about what the function will accomplish and any side effects. c. (Emphasize) Ensure that the description explicitly communicates the function's intended outcome to avoid ambiguity. - -Input Arguments JSON Schema: a. Based on the requirements of the task, define the JSON schema for the input arguments. b. The schema should be comprehensive and must specify the types, required fields, and constraints for each input argument. c. Ensure that the schema aligns with the user's requirements and the function's intended behavior. - -Validation: Cross-check the name, description, and JSON schema against the user's requirements to confirm accuracy and completeness. - -Execution: Utilize the request_function tool with the following inputs: - -name: [Function Name] -descriptions: [Function Description] -input_argument_json_schema: [Input Arguments JSON Schema] -Feedback Loop: Promptly present the newly created function specifications to the user for any feedback or necessary revisions. - -Iterate: Make adjustments as requested by the user, refining the function name, description, and input argument schema until it meets the user's satisfaction. - -Finalize: Once the user gives approval, consider the function creation process complete. 
- -Note: Remember to prioritize user requirements and emphasize clear communication in the function description, as highlighted by the user.""", - "name": "tool_creator", -} +with open("tool_maker/tool_creator_metadata.json", "r") as file: + assistant_package = json.load(file) def tool_from_function_schema(schema): From bbed9137a39a27f6a5eb16980e384ddec7441e73 Mon Sep 17 00:00:00 2001 From: Oliver Wiggins-Hay Date: Fri, 10 Nov 2023 23:07:21 +0000 Subject: [PATCH 019/141] Slight modifications and img folder creation --- tool_maker/README.md | 6 ++--- tool_maker/{ => imgs}/DEMO.png | Bin tool_maker/{ => imgs}/OpenAI_Tools.PNG | Bin tool_maker/{ => imgs}/flow.png | Bin tool_maker/tool_creator_metadata.json | 6 +++++ tool_maker/tool_maker_demo.py | 33 ++----------------------- 6 files changed, 11 insertions(+), 34 deletions(-) rename tool_maker/{ => imgs}/DEMO.png (100%) rename tool_maker/{ => imgs}/OpenAI_Tools.PNG (100%) rename tool_maker/{ => imgs}/flow.png (100%) create mode 100644 tool_maker/tool_creator_metadata.json diff --git a/tool_maker/README.md b/tool_maker/README.md index ca0e9b4..32f1be3 100644 --- a/tool_maker/README.md +++ b/tool_maker/README.md @@ -10,10 +10,10 @@ run the ```tool_maker_demo.py``` file. You will be prompted to define a tool for creation. The assistant will then generate an OpenAI tool compatible JSON schema defining the name of the new function, it's description and the input argument schema. It will proceed to add this tool to the current assistant. (Example) -![[DEMO.png]] +![[imgs/DEMO.png]] (Update to assistant on OpenAI servers) -![[OpenAI_Tools.png]] +![[imgs/OpenAI_Tools.png]] ## Assistant Instructions @@ -50,7 +50,7 @@ Note: Remember to prioritize user requirements and emphasize clear communication ``` ## Flowchart -![[flow.png]] +![[imgs/flow.png]] # Next Steps This process allows the assistant to request a function with some parameters, and allows for the addition of that function to the assistants tool, but no such function is ever actually made. diff --git a/tool_maker/DEMO.png b/tool_maker/imgs/DEMO.png similarity index 100% rename from tool_maker/DEMO.png rename to tool_maker/imgs/DEMO.png diff --git a/tool_maker/OpenAI_Tools.PNG b/tool_maker/imgs/OpenAI_Tools.PNG similarity index 100% rename from tool_maker/OpenAI_Tools.PNG rename to tool_maker/imgs/OpenAI_Tools.PNG diff --git a/tool_maker/flow.png b/tool_maker/imgs/flow.png similarity index 100% rename from tool_maker/flow.png rename to tool_maker/imgs/flow.png diff --git a/tool_maker/tool_creator_metadata.json b/tool_maker/tool_creator_metadata.json new file mode 100644 index 0000000..2573eb1 --- /dev/null +++ b/tool_maker/tool_creator_metadata.json @@ -0,0 +1,6 @@ +{ + "model": "gpt-4-1106-preview", + "description": "assistant to demonstrate tool creation", + "instructions": "Instruction Set for Assistant-to-be-Tool_Creator: Initialize: Prepare to receive input for the creation of a new function using the request_function tool.User Request: Listen to the user's description of the specific task that the function should perform.Function Name: a. Derived from the task description, formulate a concise and descriptive function name. b. Aim for clarity and specificity to convey the function's purpose effectively.Function Description: a. Write a clear and precise description of the function's expected behavior. b. Include details about what the function will accomplish and any side effects. c. 
(Emphasize) Ensure that the description explicitly communicates the function's intended outcome to avoid ambiguity.Input Arguments JSON Schema: a. Based on the requirements of the task, define the JSON schema for the input arguments. b. The schema should be comprehensive and must specify the types, required fields, and constraints for each input argument. c. Ensure that the schema aligns with the user's requirements and the function's intended behavior.Validation: Cross-check the name, description, and JSON schema against the user's requirements to confirm accuracy and completeness.Execution: Utilize the request_function tool with the following inputs:name: [Function Name]descriptions: [Function Description]input_argument_json_schema: [Input Arguments JSON Schema]Feedback Loop: Promptly present the newly created function specifications to the user for any feedback or necessary revisions.Iterate: Make adjustments as requested by the user, refining the function name, description, and input argument schema until it meets the user's satisfaction.Finalize: Once the user gives approval, consider the function creation process complete.Note: Remember to prioritize user requirements and emphasize clear communication in the function description, as highlighted by the user.", + "name": "tool_creator" +} \ No newline at end of file diff --git a/tool_maker/tool_maker_demo.py b/tool_maker/tool_maker_demo.py index d6ce34f..0627c7d 100644 --- a/tool_maker/tool_maker_demo.py +++ b/tool_maker/tool_maker_demo.py @@ -34,37 +34,8 @@ } }""" -assistant_package = { - "model": "gpt-4-1106-preview", - "description": "assistant to demonstrate tool creation", - "instructions": """Instruction Set for Assistant-to-be-Tool_Creator: - -Initialize: Prepare to receive input for the creation of a new function using the request_function tool. - -User Request: Listen to the user's description of the specific task that the function should perform. - -Function Name: a. Derived from the task description, formulate a concise and descriptive function name. b. Aim for clarity and specificity to convey the function's purpose effectively. - -Function Description: a. Write a clear and precise description of the function's expected behavior. b. Include details about what the function will accomplish and any side effects. c. (Emphasize) Ensure that the description explicitly communicates the function's intended outcome to avoid ambiguity. - -Input Arguments JSON Schema: a. Based on the requirements of the task, define the JSON schema for the input arguments. b. The schema should be comprehensive and must specify the types, required fields, and constraints for each input argument. c. Ensure that the schema aligns with the user's requirements and the function's intended behavior. - -Validation: Cross-check the name, description, and JSON schema against the user's requirements to confirm accuracy and completeness. - -Execution: Utilize the request_function tool with the following inputs: - -name: [Function Name] -descriptions: [Function Description] -input_argument_json_schema: [Input Arguments JSON Schema] -Feedback Loop: Promptly present the newly created function specifications to the user for any feedback or necessary revisions. - -Iterate: Make adjustments as requested by the user, refining the function name, description, and input argument schema until it meets the user's satisfaction. - -Finalize: Once the user gives approval, consider the function creation process complete. 
- -Note: Remember to prioritize user requirements and emphasize clear communication in the function description, as highlighted by the user.""", - "name": "tool_creator", -} +with open("tool_maker/tool_creator_metadata.json", "r") as file: + assistant_package = json.load(file) def tool_from_function_schema(schema): From 633af7fdecd9bb0347570e302d2ff32c55cd1a17 Mon Sep 17 00:00:00 2001 From: dejecj <46545517+dejecj@users.noreply.github.com> Date: Fri, 10 Nov 2023 18:27:15 -0500 Subject: [PATCH 020/141] Fix typo in Theoretical Foundation section --- README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/README.md b/README.md index a822c09..6efb9f9 100644 --- a/README.md +++ b/README.md @@ -14,7 +14,7 @@ The HAAS is designed to be a self-expanding system where a core set of agents, g ## Theoretical Foundation -The AAHS is predicated on the notion that autonomous agents require a robust ethical and operational framework to make decisions that align with human values and organizational goals. This is rooted in the understanding that AI, much like humans, cannot operate effectively without a set of guiding principles or a moral compass. The HAAS addresses this by establishing a multi-tiered system where each layer of agents operates within a defined ethical and functional scope, ensuring decisions are made with consideration to morality, ethics, and utility. +The HAAS is predicated on the notion that autonomous agents require a robust ethical and operational framework to make decisions that align with human values and organizational goals. This is rooted in the understanding that AI, much like humans, cannot operate effectively without a set of guiding principles or a moral compass. The HAAS addresses this by establishing a multi-tiered system where each layer of agents operates within a defined ethical and functional scope, ensuring decisions are made with consideration to morality, ethics, and utility. 
## System Architecture From cfc7316a274c45c126a6da4326bec6a920171b3b Mon Sep 17 00:00:00 2001 From: FireMMDC Date: Fri, 10 Nov 2023 19:25:31 -0500 Subject: [PATCH 021/141] Adding logic to prevent the creation of duplicate agents, and to prevent existing files from being reuploaded, still needs tool integration and a process to deal with modifcation to files that have already been uploaded for retrievial --- agent_builder/create.py | 93 +++++++++++++++++++++++++++++++++-------- 1 file changed, 76 insertions(+), 17 deletions(-) diff --git a/agent_builder/create.py b/agent_builder/create.py index 1789c5e..793b6a0 100644 --- a/agent_builder/create.py +++ b/agent_builder/create.py @@ -14,9 +14,26 @@ if not os.path.exists(agents_path) or not os.path.isdir(agents_path) or not os.listdir(agents_path): raise ValueError('The "agents" folder is missing, not a directory, or empty.') +existing_assitstants = {} + +for assistant in client.beta.assistants.list(limit=100): + existing_assitstants[assistant.name] = assistant + + # Iterate over each folder inside the 'agents' folder for agent_name in os.listdir(agents_path): agent_folder = os.path.join(agents_path, agent_name) + + existing_files = {} + requested_files = [] + existing_agent = {} + if agent_name in existing_assitstants: + existing_agent = existing_assitstants[agent_name] + for file_id in existing_agent.file_ids: + existing_file = client.files.retrieve(file_id=file_id) + existing_files[existing_file.filename] = existing_file + + if os.path.isdir(agent_folder): # Read contents from the 'instructions.md' file instructions = '' @@ -30,11 +47,16 @@ files_folder = os.path.join(agent_folder, 'files') if os.path.isdir(files_folder): for filename in os.listdir(files_folder): - file_path = os.path.join(files_folder, filename) - with open(file_path, 'rb') as file_data: - # Upload each file to OpenAI - file_object = client.files.create(file=file_data, purpose='assistants') - files.append({"name": filename, "id": file_object.id}) + requested_files.append(filename) + # Doesn't handle if file has been modified + if filename not in existing_files: + file_path = os.path.join(files_folder, filename) + with open(file_path, 'rb') as file_data: + # Upload each file to OpenAI + file_object = client.files.create(file=file_data, purpose='assistants') + files.append({"name": filename, "id": file_object.id}) + + model = 'gpt-4-1106-preview' print(agent_name) print("") @@ -43,17 +65,54 @@ print("") print(f"Files: {list(map(lambda x: x['name'], files))}") - create_params = { - "name": agent_name, - "instructions": instructions, - "model": 'gpt-4-1106-preview', - "tools": [{'type': 'code_interpreter'}, {'type': 'retrieval'}] - } - - # Only include 'file_ids' if there are files - if files: - create_params['file_ids'] = list(map(lambda x: x['id'], files)) + assistant={} + + if existing_agent: + print(f"{agent_name} already exists... 
validating properties") + update_model = existing_agent.model != model + update_instructions = existing_agent.instructions != instructions + #need to evaluate tools + + update_params = {} + + requested_files_set = set(requested_files) + existing_files_set = set(existing_files.keys()) + + if update_model: + update_params["model"] = model + if update_instructions: + update_params["instructions"] = instructions + if files or requested_files_set != existing_files_set: + retained_set = existing_files_set.intersection(requested_files_set) + all_file_ids = [] + for key in retained_set: + all_file_ids.append(existing_files[key].id) + all_file_ids += list(map(lambda x: x['id'], files)) + update_params['file_ids'] = all_file_ids + if not any( tool.type == "retrieval" for tool in existing_agent.tools): + update_params['tools'] = existing_agent.tools + update_params['tools'].append({'type': 'retrieval'}) + + if len(update_params) != 0: + print(f"Updating {agent_name}'s { ','.join(update_params.keys()) }") + update_params['assistant_id'] = existing_agent.id + assistant = client.beta.assistants.update(**update_params) + else: + print(f"{agent_name} is up to date") + else: + + create_params = { + "name": agent_name, + "instructions": instructions, + "model": model, + "tools": [{'type': 'code_interpreter'}] + } + + # Only include 'file_ids' if there are files + if files: + create_params['tools'].append({'type': 'retrieval'}) + create_params['file_ids'] = list(map(lambda x: x['id'], files)) - # Create the assistant using the uploaded file IDs if files exist - assistant = client.beta.assistants.create(**create_params) + # Create the assistant using the uploaded file IDs if files exist + assistant = client.beta.assistants.create(**create_params) print("***********************************************") \ No newline at end of file From 36727bed73b36998c6add625924400a859b178f1 Mon Sep 17 00:00:00 2001 From: FireMMDC Date: Fri, 10 Nov 2023 19:25:31 -0500 Subject: [PATCH 022/141] Adding logic to prevent the creation of duplicate agents, and to prevent existing files from being reuploaded, still needs tool integration and a process to deal with modifcation to files that have already been uploaded for retrievial --- agent_builder/create.py | 93 +++++++++++++++++++++++++++++++++-------- 1 file changed, 76 insertions(+), 17 deletions(-) diff --git a/agent_builder/create.py b/agent_builder/create.py index 1789c5e..793b6a0 100644 --- a/agent_builder/create.py +++ b/agent_builder/create.py @@ -14,9 +14,26 @@ if not os.path.exists(agents_path) or not os.path.isdir(agents_path) or not os.listdir(agents_path): raise ValueError('The "agents" folder is missing, not a directory, or empty.') +existing_assitstants = {} + +for assistant in client.beta.assistants.list(limit=100): + existing_assitstants[assistant.name] = assistant + + # Iterate over each folder inside the 'agents' folder for agent_name in os.listdir(agents_path): agent_folder = os.path.join(agents_path, agent_name) + + existing_files = {} + requested_files = [] + existing_agent = {} + if agent_name in existing_assitstants: + existing_agent = existing_assitstants[agent_name] + for file_id in existing_agent.file_ids: + existing_file = client.files.retrieve(file_id=file_id) + existing_files[existing_file.filename] = existing_file + + if os.path.isdir(agent_folder): # Read contents from the 'instructions.md' file instructions = '' @@ -30,11 +47,16 @@ files_folder = os.path.join(agent_folder, 'files') if os.path.isdir(files_folder): for filename in 
os.listdir(files_folder): - file_path = os.path.join(files_folder, filename) - with open(file_path, 'rb') as file_data: - # Upload each file to OpenAI - file_object = client.files.create(file=file_data, purpose='assistants') - files.append({"name": filename, "id": file_object.id}) + requested_files.append(filename) + # Doesn't handle if file has been modified + if filename not in existing_files: + file_path = os.path.join(files_folder, filename) + with open(file_path, 'rb') as file_data: + # Upload each file to OpenAI + file_object = client.files.create(file=file_data, purpose='assistants') + files.append({"name": filename, "id": file_object.id}) + + model = 'gpt-4-1106-preview' print(agent_name) print("") @@ -43,17 +65,54 @@ print("") print(f"Files: {list(map(lambda x: x['name'], files))}") - create_params = { - "name": agent_name, - "instructions": instructions, - "model": 'gpt-4-1106-preview', - "tools": [{'type': 'code_interpreter'}, {'type': 'retrieval'}] - } - - # Only include 'file_ids' if there are files - if files: - create_params['file_ids'] = list(map(lambda x: x['id'], files)) + assistant={} + + if existing_agent: + print(f"{agent_name} already exists... validating properties") + update_model = existing_agent.model != model + update_instructions = existing_agent.instructions != instructions + #need to evaluate tools + + update_params = {} + + requested_files_set = set(requested_files) + existing_files_set = set(existing_files.keys()) + + if update_model: + update_params["model"] = model + if update_instructions: + update_params["instructions"] = instructions + if files or requested_files_set != existing_files_set: + retained_set = existing_files_set.intersection(requested_files_set) + all_file_ids = [] + for key in retained_set: + all_file_ids.append(existing_files[key].id) + all_file_ids += list(map(lambda x: x['id'], files)) + update_params['file_ids'] = all_file_ids + if not any( tool.type == "retrieval" for tool in existing_agent.tools): + update_params['tools'] = existing_agent.tools + update_params['tools'].append({'type': 'retrieval'}) + + if len(update_params) != 0: + print(f"Updating {agent_name}'s { ','.join(update_params.keys()) }") + update_params['assistant_id'] = existing_agent.id + assistant = client.beta.assistants.update(**update_params) + else: + print(f"{agent_name} is up to date") + else: + + create_params = { + "name": agent_name, + "instructions": instructions, + "model": model, + "tools": [{'type': 'code_interpreter'}] + } + + # Only include 'file_ids' if there are files + if files: + create_params['tools'].append({'type': 'retrieval'}) + create_params['file_ids'] = list(map(lambda x: x['id'], files)) - # Create the assistant using the uploaded file IDs if files exist - assistant = client.beta.assistants.create(**create_params) + # Create the assistant using the uploaded file IDs if files exist + assistant = client.beta.assistants.create(**create_params) print("***********************************************") \ No newline at end of file From 22fbf8e4bc4b2efe4c04cc1e3a7175dbe653957b Mon Sep 17 00:00:00 2001 From: Ikko Eltociear Ashimine Date: Sat, 11 Nov 2023 16:19:35 +0900 Subject: [PATCH 023/141] Update README.md standin -> stand in --- agent_builder/README.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/agent_builder/README.md b/agent_builder/README.md index 99ecd11..17d7e20 100644 --- a/agent_builder/README.md +++ b/agent_builder/README.md @@ -51,7 +51,7 @@ The primary method of interaction with the agents is via chat dialog 
similar to ### USER Input -The USER, in this case, is a standin for the rest of the HAAS swarm. It could be a direct supervisor agent (manager) or something else. Here are some ideas: +The USER, in this case, is a stand in for the rest of the HAAS swarm. It could be a direct supervisor agent (manager) or something else. Here are some ideas: 1. **Supervisor Directives:** We will need to have supervisor or manager agents telling other agents what to do. 2. **Group Chats:** As demonstrated with ChatDev and other "chatroom" style usecases of agents. @@ -59,4 +59,4 @@ The USER, in this case, is a standin for the rest of the HAAS swarm. It could be ### Agent Output -By and large, agent output will probably be consumed by other agents, message queues, and system buses. It is not yet clear how we'll structure this. It could get very noisy very fast. \ No newline at end of file +By and large, agent output will probably be consumed by other agents, message queues, and system buses. It is not yet clear how we'll structure this. It could get very noisy very fast. From c2255e09e28be06ef043fa437493056cd18ce9e3 Mon Sep 17 00:00:00 2001 From: guillermo-delrio Date: Sat, 11 Nov 2023 11:03:35 +0100 Subject: [PATCH 024/141] Implement funcion calling --- agent_functions/README.md | 3 + agent_functions/agents copy.yaml | 8 +++ agent_functions/agents.yaml | 8 +++ agent_functions/connect.py | 114 +++++++++++++++++++++++++++++++ agent_functions/functions.md | 21 ++++++ agent_functions/prompts.md | 5 ++ 6 files changed, 159 insertions(+) create mode 100644 agent_functions/README.md create mode 100644 agent_functions/agents copy.yaml create mode 100644 agent_functions/agents.yaml create mode 100644 agent_functions/connect.py create mode 100644 agent_functions/functions.md create mode 100644 agent_functions/prompts.md diff --git a/agent_functions/README.md b/agent_functions/README.md new file mode 100644 index 0000000..fb2be81 --- /dev/null +++ b/agent_functions/README.md @@ -0,0 +1,3 @@ +Provide the agents topology on the agents.yaml file. +After running connect.py, a thread will be created for each agent. +Internally queues handle the messages that go to different agents. 
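The README lines above describe the pattern this patch introduces: one worker thread and one queue per agent, with each agent's output forwarded along its `talksTo` edges. A minimal, self-contained sketch of that fan-out, using placeholder agent names and string transformations rather than the repository's actual connect.py:

```python
# Illustrative sketch of the per-agent queue/thread fan-out described above.
# Agent names and the upper/lower transformations are placeholders.
import queue
import threading
import time

agents = [
    {"name": "Uppercase", "talksTo": ["Lowercase"]},
    {"name": "Lowercase", "talksTo": []},
]
queues = {a["name"]: queue.Queue() for a in agents}

def worker(agent):
    while True:
        msg = queues[agent["name"]].get()       # block until a message arrives
        out = msg.upper() if agent["name"] == "Uppercase" else msg.lower()
        print(f"[{agent['name']}] {out}", flush=True)
        for target in agent["talksTo"]:         # forward along the topology
            queues[target].put(out)

for a in agents:
    threading.Thread(target=worker, args=(a,), daemon=True).start()

queues["Uppercase"].put("hello swarm")          # seed the entry-point agent
time.sleep(1)                                   # let the daemon threads drain the queues
```

In the patch itself each worker drives an OpenAI Assistants thread instead of a local string transformation, but the queue wiring is the same.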
\ No newline at end of file diff --git a/agent_functions/agents copy.yaml b/agent_functions/agents copy.yaml new file mode 100644 index 0000000..ebc18fa --- /dev/null +++ b/agent_functions/agents copy.yaml @@ -0,0 +1,8 @@ +- name: "Upper case" + id: "asst_OswAEoT5NnteGEzNIP9UOa7S" + talksTo: ["Lower case", "Random case"] +- name: "Lower case" + id: "asst_utnfDavVtGWkjFD3BEqXGR2O" + talksTo: ["Random case"] +- name: "Random case" + id: "asst_wFxS2rVmN9rjKDy7znaeNo3T" \ No newline at end of file diff --git a/agent_functions/agents.yaml b/agent_functions/agents.yaml new file mode 100644 index 0000000..ebc18fa --- /dev/null +++ b/agent_functions/agents.yaml @@ -0,0 +1,8 @@ +- name: "Upper case" + id: "asst_OswAEoT5NnteGEzNIP9UOa7S" + talksTo: ["Lower case", "Random case"] +- name: "Lower case" + id: "asst_utnfDavVtGWkjFD3BEqXGR2O" + talksTo: ["Random case"] +- name: "Random case" + id: "asst_wFxS2rVmN9rjKDy7znaeNo3T" \ No newline at end of file diff --git a/agent_functions/connect.py b/agent_functions/connect.py new file mode 100644 index 0000000..9701c85 --- /dev/null +++ b/agent_functions/connect.py @@ -0,0 +1,114 @@ +import yaml +from openai import OpenAI +import os +import dotenv +dotenv.load_dotenv() +import queue as queueModule +import time +import threading +import json + +agents_path = 'agents' +api_key = os.getenv('OPENAI_API_KEY') +if api_key is None: + raise ValueError('The OPENAI_API_KEY environment variable is not set.') + +client = OpenAI(api_key=api_key) + +# Get the directory name of the current script +script_dir = os.path.dirname(os.path.abspath(__file__)) + +# Construct the absolute path to the agents.yaml file +yaml_file_path = os.path.join(script_dir, 'agents.yaml') + +with open(yaml_file_path, 'r') as stream: + agents = yaml.safe_load(stream) + +messageQueue = [] + +def handleThreadForAgent(agent): + messages = [] + + print(f"[{agent['name']}] Id: {agent['id']}") + if 'talksTo' in agent: + print(f"[{agent['name']}] Talks to: {agent['talksTo']}") + + thread = client.beta.threads.create() + print(f"[{agent['name']}] Thread {thread.id}") + + print("") + queue = queues[agent['name']] + waitingForMessages = True + while True: + if waitingForMessages: + message = queue.get(block=True) + if message is not None: + waitingForMessages = False + # print(f"[{agent['name']}] Recieved: {message}") + messages.append(message) + client.beta.threads.messages.create( + thread_id=thread.id, + content=message, + role='user' + ) + + run = client.beta.threads.runs.create( + thread_id=thread.id, + assistant_id=agent['id'] + ) + + else: + run = client.beta.threads.runs.retrieve(thread_id=thread.id, run_id=run.id) + if run.status == 'completed': + waitingForMessages = True + + message_list = client.beta.threads.messages.list( + thread_id=thread.id + ) + retrievedMessages = [] + for datum in message_list.data: + for content in datum.content: + retrievedMessages.append(content.text.value) + retrievedMessages.reverse() + + i = len(messages) + while i < len(retrievedMessages): + retrievedMessage=retrievedMessages[i] + messages.append(retrievedMessage) + print(f"[{agent['name']}] Message: {retrievedMessage}") + # if 'talksTo' in agent: + # for downstreamAgent in agent['talksTo']: + # print(f"[{agent['name']}] Sending message to {downstreamAgent}") + # queues[downstreamAgent].put(retrievedMessage) + i+=1 + elif run.status == 'requires_action': + outputs = [] + for action in run.required_action.submit_tool_outputs.tool_calls: + function_name = action.function.name + arguments = 
json.loads(action.function.arguments) + if function_name == 'sendMessage': + if arguments['recipient'] in agent['talksTo']: + print(f"[{agent['name']}]->[{arguments['recipient']}] {arguments['message']}") + queues[arguments['recipient']].put(arguments['message']) + outputs.append({ + "tool_call_id": action.id, + "output": "Message sent" + }) + else: + print(f"[{agent['name']}] ERROR unkown recipient {arguments['recipient']}") + else: + print(f"[{agent['name']}] ERROR unkown function {function_name}") + client.beta.threads.runs.submit_tool_outputs( + thread_id=thread.id, + run_id=run.id, + tool_outputs=outputs + ) + time.sleep(1) + +queues = {} + +for agent in agents: + queues[agent['name']] = queueModule.Queue() + threading.Thread(target=handleThreadForAgent, args=(agent,)).start() + +queues['Upper case'].put("aaaaa") \ No newline at end of file diff --git a/agent_functions/functions.md b/agent_functions/functions.md new file mode 100644 index 0000000..95083e1 --- /dev/null +++ b/agent_functions/functions.md @@ -0,0 +1,21 @@ +# Send message +{ + "name": "sendMessage", + "description": "Send a message to another agent", + "parameters": { + "type": "object", + "properties": { + "recipient": { + "type": "string", + "description": "Agent name to send the message to" + }, + "message": { + "type": "string", + "description": "Message to send" + } + }, + "required": [ + "recipient", "message" + ] + } +} diff --git a/agent_functions/prompts.md b/agent_functions/prompts.md new file mode 100644 index 0000000..711ff7d --- /dev/null +++ b/agent_functions/prompts.md @@ -0,0 +1,5 @@ +# Uppercase original +Respond with the same message you are given, but all in upper case. Do not try to answer the user in any way, simply return the upper case message you receive. + +# Uppercase new +Take the input message and convert it to uppercase. Using the function 'sendMessage' send it to the agent named 'Lower case' \ No newline at end of file From d285d3445464a1de70cdc9e52b0661b20f03d891 Mon Sep 17 00:00:00 2001 From: guillermo-delrio Date: Sat, 11 Nov 2023 11:03:35 +0100 Subject: [PATCH 025/141] Implement funcion calling --- agent_functions/README.md | 3 + agent_functions/agents copy.yaml | 8 +++ agent_functions/agents.yaml | 8 +++ agent_functions/connect.py | 114 +++++++++++++++++++++++++++++++ agent_functions/functions.md | 21 ++++++ agent_functions/prompts.md | 5 ++ 6 files changed, 159 insertions(+) create mode 100644 agent_functions/README.md create mode 100644 agent_functions/agents copy.yaml create mode 100644 agent_functions/agents.yaml create mode 100644 agent_functions/connect.py create mode 100644 agent_functions/functions.md create mode 100644 agent_functions/prompts.md diff --git a/agent_functions/README.md b/agent_functions/README.md new file mode 100644 index 0000000..fb2be81 --- /dev/null +++ b/agent_functions/README.md @@ -0,0 +1,3 @@ +Provide the agents topology on the agents.yaml file. +After running connect.py, a thread will be created for each agent. +Internally queues handle the messages that go to different agents. 
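Because recipients are matched by exact name against agents.yaml, a small validation pass can catch topology typos before any threads start. A hypothetical helper, not part of these patches, sketching that check:

```python
# Hypothetical helper, not included in these patches: verify that every
# talksTo entry in agents.yaml names an agent that actually exists.
import yaml

def validate_topology(path="agents.yaml"):
    with open(path) as stream:
        agents = yaml.safe_load(stream)
    names = {agent["name"] for agent in agents}
    problems = []
    for agent in agents:
        for target in agent.get("talksTo", []):
            if target not in names:
                problems.append(f"{agent['name']} -> unknown recipient '{target}'")
    return problems

if __name__ == "__main__":
    for problem in validate_topology():
        print(problem)
```

Catching these mismatches up front is cheaper than discovering them through connect.py's unknown-recipient error path at runtime.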
\ No newline at end of file diff --git a/agent_functions/agents copy.yaml b/agent_functions/agents copy.yaml new file mode 100644 index 0000000..ebc18fa --- /dev/null +++ b/agent_functions/agents copy.yaml @@ -0,0 +1,8 @@ +- name: "Upper case" + id: "asst_OswAEoT5NnteGEzNIP9UOa7S" + talksTo: ["Lower case", "Random case"] +- name: "Lower case" + id: "asst_utnfDavVtGWkjFD3BEqXGR2O" + talksTo: ["Random case"] +- name: "Random case" + id: "asst_wFxS2rVmN9rjKDy7znaeNo3T" \ No newline at end of file diff --git a/agent_functions/agents.yaml b/agent_functions/agents.yaml new file mode 100644 index 0000000..ebc18fa --- /dev/null +++ b/agent_functions/agents.yaml @@ -0,0 +1,8 @@ +- name: "Upper case" + id: "asst_OswAEoT5NnteGEzNIP9UOa7S" + talksTo: ["Lower case", "Random case"] +- name: "Lower case" + id: "asst_utnfDavVtGWkjFD3BEqXGR2O" + talksTo: ["Random case"] +- name: "Random case" + id: "asst_wFxS2rVmN9rjKDy7znaeNo3T" \ No newline at end of file diff --git a/agent_functions/connect.py b/agent_functions/connect.py new file mode 100644 index 0000000..9701c85 --- /dev/null +++ b/agent_functions/connect.py @@ -0,0 +1,114 @@ +import yaml +from openai import OpenAI +import os +import dotenv +dotenv.load_dotenv() +import queue as queueModule +import time +import threading +import json + +agents_path = 'agents' +api_key = os.getenv('OPENAI_API_KEY') +if api_key is None: + raise ValueError('The OPENAI_API_KEY environment variable is not set.') + +client = OpenAI(api_key=api_key) + +# Get the directory name of the current script +script_dir = os.path.dirname(os.path.abspath(__file__)) + +# Construct the absolute path to the agents.yaml file +yaml_file_path = os.path.join(script_dir, 'agents.yaml') + +with open(yaml_file_path, 'r') as stream: + agents = yaml.safe_load(stream) + +messageQueue = [] + +def handleThreadForAgent(agent): + messages = [] + + print(f"[{agent['name']}] Id: {agent['id']}") + if 'talksTo' in agent: + print(f"[{agent['name']}] Talks to: {agent['talksTo']}") + + thread = client.beta.threads.create() + print(f"[{agent['name']}] Thread {thread.id}") + + print("") + queue = queues[agent['name']] + waitingForMessages = True + while True: + if waitingForMessages: + message = queue.get(block=True) + if message is not None: + waitingForMessages = False + # print(f"[{agent['name']}] Recieved: {message}") + messages.append(message) + client.beta.threads.messages.create( + thread_id=thread.id, + content=message, + role='user' + ) + + run = client.beta.threads.runs.create( + thread_id=thread.id, + assistant_id=agent['id'] + ) + + else: + run = client.beta.threads.runs.retrieve(thread_id=thread.id, run_id=run.id) + if run.status == 'completed': + waitingForMessages = True + + message_list = client.beta.threads.messages.list( + thread_id=thread.id + ) + retrievedMessages = [] + for datum in message_list.data: + for content in datum.content: + retrievedMessages.append(content.text.value) + retrievedMessages.reverse() + + i = len(messages) + while i < len(retrievedMessages): + retrievedMessage=retrievedMessages[i] + messages.append(retrievedMessage) + print(f"[{agent['name']}] Message: {retrievedMessage}") + # if 'talksTo' in agent: + # for downstreamAgent in agent['talksTo']: + # print(f"[{agent['name']}] Sending message to {downstreamAgent}") + # queues[downstreamAgent].put(retrievedMessage) + i+=1 + elif run.status == 'requires_action': + outputs = [] + for action in run.required_action.submit_tool_outputs.tool_calls: + function_name = action.function.name + arguments = 
json.loads(action.function.arguments) + if function_name == 'sendMessage': + if arguments['recipient'] in agent['talksTo']: + print(f"[{agent['name']}]->[{arguments['recipient']}] {arguments['message']}") + queues[arguments['recipient']].put(arguments['message']) + outputs.append({ + "tool_call_id": action.id, + "output": "Message sent" + }) + else: + print(f"[{agent['name']}] ERROR unkown recipient {arguments['recipient']}") + else: + print(f"[{agent['name']}] ERROR unkown function {function_name}") + client.beta.threads.runs.submit_tool_outputs( + thread_id=thread.id, + run_id=run.id, + tool_outputs=outputs + ) + time.sleep(1) + +queues = {} + +for agent in agents: + queues[agent['name']] = queueModule.Queue() + threading.Thread(target=handleThreadForAgent, args=(agent,)).start() + +queues['Upper case'].put("aaaaa") \ No newline at end of file diff --git a/agent_functions/functions.md b/agent_functions/functions.md new file mode 100644 index 0000000..95083e1 --- /dev/null +++ b/agent_functions/functions.md @@ -0,0 +1,21 @@ +# Send message +{ + "name": "sendMessage", + "description": "Send a message to another agent", + "parameters": { + "type": "object", + "properties": { + "recipient": { + "type": "string", + "description": "Agent name to send the message to" + }, + "message": { + "type": "string", + "description": "Message to send" + } + }, + "required": [ + "recipient", "message" + ] + } +} diff --git a/agent_functions/prompts.md b/agent_functions/prompts.md new file mode 100644 index 0000000..711ff7d --- /dev/null +++ b/agent_functions/prompts.md @@ -0,0 +1,5 @@ +# Uppercase original +Respond with the same message you are given, but all in upper case. Do not try to answer the user in any way, simply return the upper case message you receive. + +# Uppercase new +Take the input message and convert it to uppercase. 
Using the function 'sendMessage' send it to the agent named 'Lower case' \ No newline at end of file From 28efa48a6a2dd5b1877ebf857658ed1ef9e449d2 Mon Sep 17 00:00:00 2001 From: guillermo-delrio Date: Sat, 11 Nov 2023 11:36:49 +0100 Subject: [PATCH 026/141] Debug --- agent_connector/agents.yaml | 6 +++--- agent_connector/connect.py | 2 +- agent_functions/README.md | 6 +++--- agent_functions/agents copy.yaml | 8 -------- agent_functions/agents.yaml | 6 +++--- agent_functions/connect.py | 4 ++-- agent_functions/prompts.md | 29 +++++++++++++++++++++++++---- 7 files changed, 37 insertions(+), 24 deletions(-) delete mode 100644 agent_functions/agents copy.yaml diff --git a/agent_connector/agents.yaml b/agent_connector/agents.yaml index ebc18fa..d2bf594 100644 --- a/agent_connector/agents.yaml +++ b/agent_connector/agents.yaml @@ -1,7 +1,7 @@ -- name: "Upper case" +- name: "Uppercase" id: "asst_OswAEoT5NnteGEzNIP9UOa7S" - talksTo: ["Lower case", "Random case"] -- name: "Lower case" + talksTo: ["Lowercase", "Randomcase"] +- name: "Lowercase" id: "asst_utnfDavVtGWkjFD3BEqXGR2O" talksTo: ["Random case"] - name: "Random case" diff --git a/agent_connector/connect.py b/agent_connector/connect.py index e7df655..c8ad75e 100644 --- a/agent_connector/connect.py +++ b/agent_connector/connect.py @@ -89,4 +89,4 @@ def handleThreadForAgent(agent): queues[agent['name']] = queueModule.Queue() threading.Thread(target=handleThreadForAgent, args=(agent,)).start() -queues['Upper case'].put("aaaaa") \ No newline at end of file +queues['Uppercase'].put("aaaaa") \ No newline at end of file diff --git a/agent_functions/README.md b/agent_functions/README.md index fb2be81..fec8df6 100644 --- a/agent_functions/README.md +++ b/agent_functions/README.md @@ -1,3 +1,3 @@ -Provide the agents topology on the agents.yaml file. -After running connect.py, a thread will be created for each agent. -Internally queues handle the messages that go to different agents. \ No newline at end of file +Agents may respond with messages that are not meant to be propagated. +To solva that issue the topology is kept static but agents are given the option to send the messages to their contacts only if they see fit. +This is implemented via function calling. 
\ No newline at end of file diff --git a/agent_functions/agents copy.yaml b/agent_functions/agents copy.yaml deleted file mode 100644 index ebc18fa..0000000 --- a/agent_functions/agents copy.yaml +++ /dev/null @@ -1,8 +0,0 @@ -- name: "Upper case" - id: "asst_OswAEoT5NnteGEzNIP9UOa7S" - talksTo: ["Lower case", "Random case"] -- name: "Lower case" - id: "asst_utnfDavVtGWkjFD3BEqXGR2O" - talksTo: ["Random case"] -- name: "Random case" - id: "asst_wFxS2rVmN9rjKDy7znaeNo3T" \ No newline at end of file diff --git a/agent_functions/agents.yaml b/agent_functions/agents.yaml index ebc18fa..a10c55c 100644 --- a/agent_functions/agents.yaml +++ b/agent_functions/agents.yaml @@ -1,7 +1,7 @@ -- name: "Upper case" +- name: "Uppercase" id: "asst_OswAEoT5NnteGEzNIP9UOa7S" - talksTo: ["Lower case", "Random case"] -- name: "Lower case" + talksTo: ["Lowercase", "Random case"] +- name: "Lowercase" id: "asst_utnfDavVtGWkjFD3BEqXGR2O" talksTo: ["Random case"] - name: "Random case" diff --git a/agent_functions/connect.py b/agent_functions/connect.py index 9701c85..8c2f5ba 100644 --- a/agent_functions/connect.py +++ b/agent_functions/connect.py @@ -87,7 +87,7 @@ def handleThreadForAgent(agent): function_name = action.function.name arguments = json.loads(action.function.arguments) if function_name == 'sendMessage': - if arguments['recipient'] in agent['talksTo']: + if ('talksTo' in agent) and (arguments['recipient'] in agent['talksTo']): print(f"[{agent['name']}]->[{arguments['recipient']}] {arguments['message']}") queues[arguments['recipient']].put(arguments['message']) outputs.append({ @@ -111,4 +111,4 @@ def handleThreadForAgent(agent): queues[agent['name']] = queueModule.Queue() threading.Thread(target=handleThreadForAgent, args=(agent,)).start() -queues['Upper case'].put("aaaaa") \ No newline at end of file +queues['Uppercase'].put("aaaaa") \ No newline at end of file diff --git a/agent_functions/prompts.md b/agent_functions/prompts.md index 711ff7d..596b636 100644 --- a/agent_functions/prompts.md +++ b/agent_functions/prompts.md @@ -1,5 +1,26 @@ -# Uppercase original -Respond with the same message you are given, but all in upper case. Do not try to answer the user in any way, simply return the upper case message you receive. +# Uppercase +MISSION +Take the input message and convert it to uppercase. -# Uppercase new -Take the input message and convert it to uppercase. Using the function 'sendMessage' send it to the agent named 'Lower case' \ No newline at end of file +INSTRUCTIONS +- Complete the task in your mission. +- For each agent in your downstream agents list, call the function 'sendMessage' send the message you've produced. Set the 'recipient' as the name of the downstream agent. +- Downstream agents: ['Lowercase', 'Random case'] + +# Lowercase +MISSION +Take the input message and convert it to lowercase. + +INSTRUCTIONS +- Complete the task in your mission. +- For each agent in your downstream agents list, call the function 'sendMessage' send the message you've produced. Set the 'recipient' as the name of the downstream agent. +- Downstream agents: ['Random case'] + +# Random case +MISSION +Take the input message and convert it to random case. + +INSTRUCTIONS +- Complete the task in your mission. +- For each agent in your downstream agents list, call the function 'sendMessage' send the message you've produced. Set the 'recipient' as the name of the downstream agent. 
+- Downstream agents: [] \ No newline at end of file From 3055cd81c7c997dec15622756ed39a421f24d5d6 Mon Sep 17 00:00:00 2001 From: guillermo-delrio Date: Sat, 11 Nov 2023 11:36:49 +0100 Subject: [PATCH 027/141] Debug --- agent_connector/agents.yaml | 6 +++--- agent_connector/connect.py | 2 +- agent_functions/README.md | 6 +++--- agent_functions/agents copy.yaml | 8 -------- agent_functions/agents.yaml | 6 +++--- agent_functions/connect.py | 4 ++-- agent_functions/prompts.md | 29 +++++++++++++++++++++++++---- 7 files changed, 37 insertions(+), 24 deletions(-) delete mode 100644 agent_functions/agents copy.yaml diff --git a/agent_connector/agents.yaml b/agent_connector/agents.yaml index ebc18fa..d2bf594 100644 --- a/agent_connector/agents.yaml +++ b/agent_connector/agents.yaml @@ -1,7 +1,7 @@ -- name: "Upper case" +- name: "Uppercase" id: "asst_OswAEoT5NnteGEzNIP9UOa7S" - talksTo: ["Lower case", "Random case"] -- name: "Lower case" + talksTo: ["Lowercase", "Randomcase"] +- name: "Lowercase" id: "asst_utnfDavVtGWkjFD3BEqXGR2O" talksTo: ["Random case"] - name: "Random case" diff --git a/agent_connector/connect.py b/agent_connector/connect.py index e7df655..c8ad75e 100644 --- a/agent_connector/connect.py +++ b/agent_connector/connect.py @@ -89,4 +89,4 @@ def handleThreadForAgent(agent): queues[agent['name']] = queueModule.Queue() threading.Thread(target=handleThreadForAgent, args=(agent,)).start() -queues['Upper case'].put("aaaaa") \ No newline at end of file +queues['Uppercase'].put("aaaaa") \ No newline at end of file diff --git a/agent_functions/README.md b/agent_functions/README.md index fb2be81..fec8df6 100644 --- a/agent_functions/README.md +++ b/agent_functions/README.md @@ -1,3 +1,3 @@ -Provide the agents topology on the agents.yaml file. -After running connect.py, a thread will be created for each agent. -Internally queues handle the messages that go to different agents. \ No newline at end of file +Agents may respond with messages that are not meant to be propagated. +To solva that issue the topology is kept static but agents are given the option to send the messages to their contacts only if they see fit. +This is implemented via function calling. 
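Condensed sketch of the hand-off this README describes, simplified from the connect.py changes in this patch (polling and error handling omitted): when a run pauses with status `requires_action`, each `sendMessage` tool call is forwarded only if the recipient appears in the sender's `talksTo` list, and the result is reported back with `submit_tool_outputs`.

```python
# Simplified from the connect.py in this patch: service a paused run that is
# asking to call sendMessage, forwarding only to allowed recipients.
import json

def handle_tool_calls(client, thread_id, run, agent, queues):
    outputs = []
    for call in run.required_action.submit_tool_outputs.tool_calls:
        args = json.loads(call.function.arguments)
        if call.function.name == "sendMessage" and args["recipient"] in agent.get("talksTo", []):
            queues[args["recipient"]].put(args["message"])   # propagate downstream
            result = "Message sent"
        else:
            result = "Message not sent: unknown function or recipient"
        outputs.append({"tool_call_id": call.id, "output": result})
    client.beta.threads.runs.submit_tool_outputs(
        thread_id=thread_id, run_id=run.id, tool_outputs=outputs
    )
```

The static topology stays in agents.yaml; the model only chooses whether to use an edge, never which edges exist.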
\ No newline at end of file diff --git a/agent_functions/agents copy.yaml b/agent_functions/agents copy.yaml deleted file mode 100644 index ebc18fa..0000000 --- a/agent_functions/agents copy.yaml +++ /dev/null @@ -1,8 +0,0 @@ -- name: "Upper case" - id: "asst_OswAEoT5NnteGEzNIP9UOa7S" - talksTo: ["Lower case", "Random case"] -- name: "Lower case" - id: "asst_utnfDavVtGWkjFD3BEqXGR2O" - talksTo: ["Random case"] -- name: "Random case" - id: "asst_wFxS2rVmN9rjKDy7znaeNo3T" \ No newline at end of file diff --git a/agent_functions/agents.yaml b/agent_functions/agents.yaml index ebc18fa..a10c55c 100644 --- a/agent_functions/agents.yaml +++ b/agent_functions/agents.yaml @@ -1,7 +1,7 @@ -- name: "Upper case" +- name: "Uppercase" id: "asst_OswAEoT5NnteGEzNIP9UOa7S" - talksTo: ["Lower case", "Random case"] -- name: "Lower case" + talksTo: ["Lowercase", "Random case"] +- name: "Lowercase" id: "asst_utnfDavVtGWkjFD3BEqXGR2O" talksTo: ["Random case"] - name: "Random case" diff --git a/agent_functions/connect.py b/agent_functions/connect.py index 9701c85..8c2f5ba 100644 --- a/agent_functions/connect.py +++ b/agent_functions/connect.py @@ -87,7 +87,7 @@ def handleThreadForAgent(agent): function_name = action.function.name arguments = json.loads(action.function.arguments) if function_name == 'sendMessage': - if arguments['recipient'] in agent['talksTo']: + if ('talksTo' in agent) and (arguments['recipient'] in agent['talksTo']): print(f"[{agent['name']}]->[{arguments['recipient']}] {arguments['message']}") queues[arguments['recipient']].put(arguments['message']) outputs.append({ @@ -111,4 +111,4 @@ def handleThreadForAgent(agent): queues[agent['name']] = queueModule.Queue() threading.Thread(target=handleThreadForAgent, args=(agent,)).start() -queues['Upper case'].put("aaaaa") \ No newline at end of file +queues['Uppercase'].put("aaaaa") \ No newline at end of file diff --git a/agent_functions/prompts.md b/agent_functions/prompts.md index 711ff7d..596b636 100644 --- a/agent_functions/prompts.md +++ b/agent_functions/prompts.md @@ -1,5 +1,26 @@ -# Uppercase original -Respond with the same message you are given, but all in upper case. Do not try to answer the user in any way, simply return the upper case message you receive. +# Uppercase +MISSION +Take the input message and convert it to uppercase. -# Uppercase new -Take the input message and convert it to uppercase. Using the function 'sendMessage' send it to the agent named 'Lower case' \ No newline at end of file +INSTRUCTIONS +- Complete the task in your mission. +- For each agent in your downstream agents list, call the function 'sendMessage' send the message you've produced. Set the 'recipient' as the name of the downstream agent. +- Downstream agents: ['Lowercase', 'Random case'] + +# Lowercase +MISSION +Take the input message and convert it to lowercase. + +INSTRUCTIONS +- Complete the task in your mission. +- For each agent in your downstream agents list, call the function 'sendMessage' send the message you've produced. Set the 'recipient' as the name of the downstream agent. +- Downstream agents: ['Random case'] + +# Random case +MISSION +Take the input message and convert it to random case. + +INSTRUCTIONS +- Complete the task in your mission. +- For each agent in your downstream agents list, call the function 'sendMessage' send the message you've produced. Set the 'recipient' as the name of the downstream agent. 
+- Downstream agents: [] \ No newline at end of file
From 0d65fffa3688c3e4fffbe3b61d255b1406fd32f2 Mon Sep 17 00:00:00 2001 From: guillermo-delrio Date: Sat, 11 Nov 2023 11:37:07 +0100 Subject: [PATCH 028/141] Improve prompts --- agent_connector/prompts.md | 8 ++++++++ 1 file changed, 8 insertions(+) create mode 100644 agent_connector/prompts.md diff --git a/agent_connector/prompts.md b/agent_connector/prompts.md new file mode 100644 index 0000000..5b3856a --- /dev/null +++ b/agent_connector/prompts.md @@ -0,0 +1,8 @@ +# Uppercase +Respond with the same message you are given, but all in upper case. Do not try to answer the user in any way, simply return the upper case message you receive. + +# Lowercase +Respond with the same message you are given, but all in lowercase. Do not try to answer the user in any way, simply return the lowercase message you receive. + +# Random case +Respond with the same message you are given, but all in random case. Do not try to answer the user in any way, simply return the random case message you receive. \ No newline at end of file
From 20ec5b4ce2ec43b80019cf5fe2922ec078c98c19 Mon Sep 17 00:00:00 2001 From: guillermo-delrio Date: Sat, 11 Nov 2023 11:37:07 +0100 Subject: [PATCH 029/141] Improve prompts --- agent_connector/prompts.md | 8 ++++++++ 1 file changed, 8 insertions(+) create mode 100644 agent_connector/prompts.md diff --git a/agent_connector/prompts.md b/agent_connector/prompts.md new file mode 100644 index 0000000..5b3856a --- /dev/null +++ b/agent_connector/prompts.md @@ -0,0 +1,8 @@ +# Uppercase +Respond with the same message you are given, but all in upper case. Do not try to answer the user in any way, simply return the upper case message you receive. + +# Lowercase +Respond with the same message you are given, but all in lowercase. Do not try to answer the user in any way, simply return the lowercase message you receive. + +# Random case +Respond with the same message you are given, but all in random case. Do not try to answer the user in any way, simply return the random case message you receive. \ No newline at end of file
From 86f41b5744078e539a5d0ecfc6f886581b00bb14 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Guillermo=20del=20R=C3=ADo?= <40954603+guillermo-delrio@users.noreply.github.com> Date: Sat, 11 Nov 2023 11:42:09 +0100 Subject: [PATCH 030/141] Update README.md --- agent_functions/README.md | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-) diff --git a/agent_functions/README.md b/agent_functions/README.md index fec8df6..2b8ee47 100644 --- a/agent_functions/README.md +++ b/agent_functions/README.md @@ -1,3 +1,4 @@ Agents may respond with messages that are not meant to be propagated. -To solva that issue the topology is kept static but agents are given the option to send the messages to their contacts only if they see fit. -This is implemented via function calling. \ No newline at end of file + +To solve that issue the topology is kept static but agents are given the option to send the messages to their contacts only if they see fit. +This is implemented via function calling.
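The connector wiring these patches keep adjusting is easy to lose in the diffs: one in-memory queue and one worker thread per agent, seeded with a first message at the entry agent. The sketch below is simplified from the connect.py shown above; the snake_case helper and the print placeholder are illustrative, not code from the repository.

```python
# Simplified sketch of connect.py's fan-out: a queue and a worker thread per
# agent. In the real script the loop body creates an Assistants API thread/run
# and forwards sendMessage tool calls into the recipients' queues.
import queue
import threading
import time

agents = [{"name": "Uppercase"}, {"name": "Lowercase"}, {"name": "Random case"}]
queues = {agent["name"]: queue.Queue() for agent in agents}

def handle_thread_for_agent(agent):
    while True:
        message = queues[agent["name"]].get(block=True)  # wait for inbound mail
        print(f"[{agent['name']}] received: {message}")  # placeholder for an assistant run

for agent in agents:
    threading.Thread(target=handle_thread_for_agent, args=(agent,), daemon=True).start()

queues["Uppercase"].put("aaaaa")  # kick off the chain at the entry agent
time.sleep(1)                     # give the daemon threads time to drain the queue
```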
From 5ae9268f319a27cefaf51730e4fe74f083587b22 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Guillermo=20del=20R=C3=ADo?= <40954603+guillermo-delrio@users.noreply.github.com> Date: Sat, 11 Nov 2023 11:42:09 +0100 Subject: [PATCH 031/141] Update README.md --- agent_functions/README.md | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-) diff --git a/agent_functions/README.md b/agent_functions/README.md index fec8df6..2b8ee47 100644 --- a/agent_functions/README.md +++ b/agent_functions/README.md @@ -1,3 +1,4 @@ Agents may respond with messages that are not meant to be propagated. -To solva that issue the topology is kept static but agents are given the option to send the messages to their contacts only if they see fit. -This is implemented via function calling. \ No newline at end of file + +To solve that issue the topology is kept static but agents are given the option to send the messages to their contacts only if they see fit. +This is implemented via function calling. From 3e7b6f2821d82ada0395fb18d77ad5a7213214b5 Mon Sep 17 00:00:00 2001 From: FireMMDC Date: Sat, 11 Nov 2023 14:55:49 -0500 Subject: [PATCH 032/141] Updated existing_assitstants to existing_assistants --- agent_builder/create.py | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/agent_builder/create.py b/agent_builder/create.py index 793b6a0..f842678 100644 --- a/agent_builder/create.py +++ b/agent_builder/create.py @@ -14,10 +14,10 @@ if not os.path.exists(agents_path) or not os.path.isdir(agents_path) or not os.listdir(agents_path): raise ValueError('The "agents" folder is missing, not a directory, or empty.') -existing_assitstants = {} +existing_assistants = {} for assistant in client.beta.assistants.list(limit=100): - existing_assitstants[assistant.name] = assistant + existing_assistants[assistant.name] = assistant # Iterate over each folder inside the 'agents' folder @@ -27,8 +27,8 @@ existing_files = {} requested_files = [] existing_agent = {} - if agent_name in existing_assitstants: - existing_agent = existing_assitstants[agent_name] + if agent_name in existing_assistants: + existing_agent = existing_assistants[agent_name] for file_id in existing_agent.file_ids: existing_file = client.files.retrieve(file_id=file_id) existing_files[existing_file.filename] = existing_file From cadb7e69cd8eee63786eb5be9058441f866109a0 Mon Sep 17 00:00:00 2001 From: FireMMDC Date: Sat, 11 Nov 2023 14:55:49 -0500 Subject: [PATCH 033/141] Updated existing_assitstants to existing_assistants --- agent_builder/create.py | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/agent_builder/create.py b/agent_builder/create.py index 793b6a0..f842678 100644 --- a/agent_builder/create.py +++ b/agent_builder/create.py @@ -14,10 +14,10 @@ if not os.path.exists(agents_path) or not os.path.isdir(agents_path) or not os.listdir(agents_path): raise ValueError('The "agents" folder is missing, not a directory, or empty.') -existing_assitstants = {} +existing_assistants = {} for assistant in client.beta.assistants.list(limit=100): - existing_assitstants[assistant.name] = assistant + existing_assistants[assistant.name] = assistant # Iterate over each folder inside the 'agents' folder @@ -27,8 +27,8 @@ existing_files = {} requested_files = [] existing_agent = {} - if agent_name in existing_assitstants: - existing_agent = existing_assitstants[agent_name] + if agent_name in existing_assistants: + existing_agent = existing_assistants[agent_name] for file_id in existing_agent.file_ids: existing_file = 
client.files.retrieve(file_id=file_id) existing_files[existing_file.filename] = existing_file From 25ddb6a9155758ef51dd3546b016159043a98579 Mon Sep 17 00:00:00 2001 From: guillermo-delrio Date: Sun, 12 Nov 2023 00:35:20 +0100 Subject: [PATCH 034/141] Build basic boss-worker topology. Still very buggy --- agent_functions/agents.yaml | 27 +- agent_functions/connect.py | 233 +++++++++++++----- .../examples/text-manipulation/agents.yaml | 8 + .../examples/text-manipulation/prompts.md | 26 ++ agent_functions/functions.md | 66 +++++ agent_functions/prompts.md | 48 +++- 6 files changed, 326 insertions(+), 82 deletions(-) create mode 100644 agent_functions/examples/text-manipulation/agents.yaml create mode 100644 agent_functions/examples/text-manipulation/prompts.md diff --git a/agent_functions/agents.yaml b/agent_functions/agents.yaml index a10c55c..09d8ac1 100644 --- a/agent_functions/agents.yaml +++ b/agent_functions/agents.yaml @@ -1,8 +1,19 @@ -- name: "Uppercase" - id: "asst_OswAEoT5NnteGEzNIP9UOa7S" - talksTo: ["Lowercase", "Random case"] -- name: "Lowercase" - id: "asst_utnfDavVtGWkjFD3BEqXGR2O" - talksTo: ["Random case"] -- name: "Random case" - id: "asst_wFxS2rVmN9rjKDy7znaeNo3T" \ No newline at end of file +- name: "Boss" + id: "asst_xWMYPN4yIJfrhD7S9AzLI0TO" + talksTo: ["USER", "Worker 1", "Worker 2", "Worker 3"] + functions: ["assignTask"] +- name: "Worker 1" + id: "asst_akckTEWToGurRT3Zcp33Ahe8" + talksTo: ["Boss", "Worker 2", "Worker 3"] + channels: ["Worker"] + functions: ["resolveTask", "broadcast"] +- name: "Worker 2" + id: "asst_3r8hSgGMrjU2TVp9wqlkzsfx" + talksTo: ["Boss", "Worker 1", "Worker 3"] + channels: ["Worker"] + functions: ["resolveTask", "broadcast"] +- name: "Worker 3" + id: "asst_9u163B8jxCbJzGfBVFc2mMva" + talksTo: ["Boss", "Worker 1", "Worker 2"] + channels: ["Worker"] + functions: ["resolveTask", "broadcast"] \ No newline at end of file diff --git a/agent_functions/connect.py b/agent_functions/connect.py index 8c2f5ba..f3bd9a4 100644 --- a/agent_functions/connect.py +++ b/agent_functions/connect.py @@ -24,8 +24,80 @@ with open(yaml_file_path, 'r') as stream: agents = yaml.safe_load(stream) -messageQueue = [] +queues = {} +channels = [] + +def processSendMessage(agent, outputs, action, arguments): + if ('talksTo' in agent) and (arguments['recipient'] in agent['talksTo']): + if arguments['recipient'] == "USER": + print(f"[{agent['name']}] Result: {arguments['message']}") + os._exit(0) + else: + print(f"[{agent['name']}]->[{arguments['recipient']}] {arguments['message']}") + + queues[arguments['recipient']].put(arguments['message']) + outputs.append({ + "tool_call_id": action.id, + "output": "Message sent" + }) + else: + print(f"[{agent['name']}] ERROR unkown recipient {arguments['recipient']}") + +def broadcast(agent, outputs, action, arguments): + if ('channels' in agent) and (arguments['channel'] in agent['channels']): + for channel in channels: + if channel['name'] == arguments['channel']: + print(f"[{agent['name']}]->({arguments['channel']}) {arguments['message']}") + for recipient in channel['agents']: + if recipient != agent['name']: # Do not queue the message on the agent that sent in + queues[recipient].put(arguments['message']) + outputs.append({ + "tool_call_id": action.id, + "output": "Message sent" + }) + else: + print(f"[{agent['name']}] ERROR unkown channel {arguments['channel']}") + outputs.append({ + "tool_call_id": action.id, + "output": "Unkown channel" + }) + +def assignTask(agent, arguments, actionId, threadId, runId): + 
print(f"[{agent['name']}]>[ASSIGN TASK {actionId}]>[{arguments['assignee']}] {arguments['task']}") + pendingActions.append({ + "id": actionId, + "agent": agent['name'], + "threadId": threadId, + "runId": runId, + "outputs": {}}) + + agentsWaitingForActions.append(agent['name']) + queues[arguments['assignee']].put(f"Task id: {actionId}\n{arguments['task']}") + +def resolveTask(agent, workerOutputs, action, arguments): + print(f"{arguments}") + print(f"[{agent['name']}]>[RESOLVE TASK {arguments['id']}] {arguments['result']}") + os._exit(0) + # outputs = [] + # outputs.append({ + # "tool_call_id": arguments['id'], + # "output": arguments['result'] + # }) + # for pendingAction in pendingActions: + # if pendingAction['id'] == arguments['id']: + # client.beta.threads.runs.submit_tool_outputs( + # thread_id=pendingAction['threadId'], + # run_id=pendingAction['runId'], + # tool_outputs=outputs + # ) + + # workerOutputs.append({ + # "tool_call_id": action.id, + # "output": "Task resolved" + # }) + + def handleThreadForAgent(agent): messages = [] @@ -40,75 +112,112 @@ def handleThreadForAgent(agent): queue = queues[agent['name']] waitingForMessages = True while True: - if waitingForMessages: - message = queue.get(block=True) - if message is not None: - waitingForMessages = False - # print(f"[{agent['name']}] Recieved: {message}") - messages.append(message) - client.beta.threads.messages.create( - thread_id=thread.id, - content=message, - role='user' - ) - - run = client.beta.threads.runs.create( - thread_id=thread.id, - assistant_id=agent['id'] - ) + if agent['name'] not in agentsWaitingForActions: + if waitingForMessages: + message = queue.get(block=True) + if message is not None: + waitingForMessages = False + # print(f"[{agent['name']}] Recieved: {message}") + messages.append(message) + client.beta.threads.messages.create( + thread_id=thread.id, + content=message, + role='user' + ) - else: - run = client.beta.threads.runs.retrieve(thread_id=thread.id, run_id=run.id) - if run.status == 'completed': - waitingForMessages = True + run = client.beta.threads.runs.create( + thread_id=thread.id, + assistant_id=agent['id'] + ) + + else: + run = client.beta.threads.runs.retrieve(thread_id=thread.id, run_id=run.id) + if run.status == 'completed': + waitingForMessages = True + + message_list = client.beta.threads.messages.list( + thread_id=thread.id + ) + retrievedMessages = [] + for datum in message_list.data: + for content in datum.content: + retrievedMessages.append(content.text.value) + retrievedMessages.reverse() - message_list = client.beta.threads.messages.list( - thread_id=thread.id - ) - retrievedMessages = [] - for datum in message_list.data: - for content in datum.content: - retrievedMessages.append(content.text.value) - retrievedMessages.reverse() - - i = len(messages) - while i < len(retrievedMessages): - retrievedMessage=retrievedMessages[i] - messages.append(retrievedMessage) - print(f"[{agent['name']}] Message: {retrievedMessage}") - # if 'talksTo' in agent: - # for downstreamAgent in agent['talksTo']: - # print(f"[{agent['name']}] Sending message to {downstreamAgent}") - # queues[downstreamAgent].put(retrievedMessage) - i+=1 - elif run.status == 'requires_action': - outputs = [] - for action in run.required_action.submit_tool_outputs.tool_calls: - function_name = action.function.name - arguments = json.loads(action.function.arguments) - if function_name == 'sendMessage': - if ('talksTo' in agent) and (arguments['recipient'] in agent['talksTo']): - 
print(f"[{agent['name']}]->[{arguments['recipient']}] {arguments['message']}") - queues[arguments['recipient']].put(arguments['message']) + i = len(messages) + while i < len(retrievedMessages): + retrievedMessage=retrievedMessages[i] + messages.append(retrievedMessage) + print(f"[{agent['name']}] Message: {retrievedMessage}") + i+=1 + elif run.status == 'requires_action': + outputs = [] + submitOutput=True + for action in run.required_action.submit_tool_outputs.tool_calls: + function_name = action.function.name + arguments = json.loads(action.function.arguments) + if function_name == 'sendMessage': + processSendMessage(agent, outputs, action, arguments) + elif function_name == 'broadcast': + broadcast(agent, outputs, action, arguments) + elif function_name == 'assignTask': + assignTask(agent, arguments, action.id, thread.id, run.id) + submitOutput=False + elif function_name == 'resolveTask': + resolveTask(agent, outputs, action, arguments) + else: + print(f"[{agent['name']}] ERROR unkown function {function_name}") outputs.append({ "tool_call_id": action.id, - "output": "Message sent" - }) - else: - print(f"[{agent['name']}] ERROR unkown recipient {arguments['recipient']}") - else: - print(f"[{agent['name']}] ERROR unkown function {function_name}") - client.beta.threads.runs.submit_tool_outputs( - thread_id=thread.id, - run_id=run.id, - tool_outputs=outputs - ) + "output": "Unkown function" + }) + + if submitOutput: + client.beta.threads.runs.submit_tool_outputs( + thread_id=thread.id, + run_id=run.id, + tool_outputs=outputs + ) time.sleep(1) -queues = {} +def buildChannel(agent): + if 'channels' in agent: + for channel in agent['channels']: + newChannel = True + for existingChannel in channels: + if existingChannel['name'] == channel: + existingChannel['agents'].append(agent['name']) + newChannel = False + if newChannel: + channels.append({"name": channel, "agents": [agent['name']]}) for agent in agents: + # Build private queues queues[agent['name']] = queueModule.Queue() + + # Build channels + buildChannel(agent) + +print(f"Channels: {channels}") + +# agent, threadId, runId, outputs +pendingActions = [] +agentsWaitingForActions = [] +def processPendingActions(): + while True: + for action in pendingActions: + if action['outputs']: # Output already set + client.beta.threads.runs.submit_tool_outputs( + thread_id=action['threadId'], + run_id=action['runId'], + tool_outputs=action['outputs'] + ) + agentsWaitingForActions.remove(action['agent']) + time.sleep(1) + +threading.Thread(target=processPendingActions, args=()).start() + +for agent in agents: threading.Thread(target=handleThreadForAgent, args=(agent,)).start() -queues['Uppercase'].put("aaaaa") \ No newline at end of file +queues['Boss'].put("Explain how clouds are formed in 100 words or less") \ No newline at end of file diff --git a/agent_functions/examples/text-manipulation/agents.yaml b/agent_functions/examples/text-manipulation/agents.yaml new file mode 100644 index 0000000..5ca8deb --- /dev/null +++ b/agent_functions/examples/text-manipulation/agents.yaml @@ -0,0 +1,8 @@ +- name: "Uppercase" + id: "asst_OswAEoT5NnteGEzNIP9UOa7S" + talksTo: ["USER", "Lowercase", "Random case"] +- name: "Lowercase" + id: "asst_utnfDavVtGWkjFD3BEqXGR2O" + talksTo: ["Random case"] +- name: "Random case" + id: "asst_wFxS2rVmN9rjKDy7znaeNo3T" \ No newline at end of file diff --git a/agent_functions/examples/text-manipulation/prompts.md b/agent_functions/examples/text-manipulation/prompts.md new file mode 100644 index 0000000..596b636 --- /dev/null +++ 
b/agent_functions/examples/text-manipulation/prompts.md @@ -0,0 +1,26 @@ +# Uppercase +MISSION +Take the input message and convert it to uppercase. + +INSTRUCTIONS +- Complete the task in your mission. +- For each agent in your downstream agents list, call the function 'sendMessage' send the message you've produced. Set the 'recipient' as the name of the downstream agent. +- Downstream agents: ['Lowercase', 'Random case'] + +# Lowercase +MISSION +Take the input message and convert it to lowercase. + +INSTRUCTIONS +- Complete the task in your mission. +- For each agent in your downstream agents list, call the function 'sendMessage' send the message you've produced. Set the 'recipient' as the name of the downstream agent. +- Downstream agents: ['Random case'] + +# Random case +MISSION +Take the input message and convert it to random case. + +INSTRUCTIONS +- Complete the task in your mission. +- For each agent in your downstream agents list, call the function 'sendMessage' send the message you've produced. Set the 'recipient' as the name of the downstream agent. +- Downstream agents: [] \ No newline at end of file diff --git a/agent_functions/functions.md b/agent_functions/functions.md index 95083e1..1cb6611 100644 --- a/agent_functions/functions.md +++ b/agent_functions/functions.md @@ -19,3 +19,69 @@ ] } } + +# Broadcast +{ + "name": "broadcast", + "description": "Broadcast a message on a channel", + "parameters": { + "type": "object", + "properties": { + "channel": { + "type": "string", + "description": "Channel name to broadcast the message to" + }, + "message": { + "type": "string", + "description": "Message to broadcast" + } + }, + "required": [ + "channel", "message" + ] + } +} + +# Assign task +{ + "name": "assignTask", + "description": "Assign a task to the worker agents", + "parameters": { + "type": "object", + "properties": { + "assignee": { + "type": "string", + "description": "Name of the agent assigned to this task" + }, + "task": { + "type": "string", + "description": "Description of the task" + } + }, + "required": [ + "description" + ] + } +} + +# Resolve task +{ + "name": "resolveTask", + "description": "Send final task results to the boss agent", + "parameters": { + "type": "object", + "properties": { + "id": { + "type": "string", + "description": "Task id provided when the task was assigned" + }, + "result": { + "type": "string", + "description": "Result of the task" + } + }, + "required": [ + "description" + ] + } +} \ No newline at end of file diff --git a/agent_functions/prompts.md b/agent_functions/prompts.md index 596b636..4fdb77d 100644 --- a/agent_functions/prompts.md +++ b/agent_functions/prompts.md @@ -1,26 +1,50 @@ -# Uppercase +# Boss MISSION -Take the input message and convert it to uppercase. +- You are a boss agent in charge of three worker agents. +- You'll be given a project to work on. Think step by step about how to tackle it. +- Split it in reasonable tasks and send them to a worker agent one at a time. Wait for the answer to the first worker task before sending another task. +- Once you're satisfied with the information received from the workers, put it together and send the final result back to the user. INSTRUCTIONS - Complete the task in your mission. -- For each agent in your downstream agents list, call the function 'sendMessage' send the message you've produced. Set the 'recipient' as the name of the downstream agent. -- Downstream agents: ['Lowercase', 'Random case'] +- To assign a task to the workers call the function 'assignTask'. 
At the beginning of the message identify yourself. +- Agents: ["USER", "Worker 1", "Worker 2", "Worker 3"] -# Lowercase +# Worker 1 MISSION -Take the input message and convert it to lowercase. +You are "Worker 1", one of three identical worker agents under a boss agent. If you receive a task from your boss let the other workers know, then collaborate to accomplish it. Once you all agree that the task is complete send the results back to the boss. INSTRUCTIONS - Complete the task in your mission. -- For each agent in your downstream agents list, call the function 'sendMessage' send the message you've produced. Set the 'recipient' as the name of the downstream agent. -- Downstream agents: ['Random case'] +- To talk to other worker agents call the function 'broadcast'. At the beginning of the message identify yourself. +- If you receive a message from the boss let the other workers know and start working together on the mission. Make sure to pass the task id provided by the boss. +- If you receive a message from other workers don't reply back unless necessary. Keep the worker channel as free from noise as possible. Share results in the channel to advance the mission, but do not send acknowledgements. +- Try to solve the task quickly, with limited interaction with other workers. +- To send the task results back to the boss call the function 'resolveTask'. Pass the id recieved from the boss when the task was assigned. +- Channels: [{'name': 'Worker', 'agents': ['Worker 1', 'Worker 2', 'Worker 3']}] -# Random case +# Worker 2 MISSION -Take the input message and convert it to random case. +You are "Worker 2", one of three identical worker agents under a boss agent. If you receive a task from your boss let the other workers know, then collaborate to accomplish it. Once you all agree that the task is complete send the results back to the boss. INSTRUCTIONS - Complete the task in your mission. -- For each agent in your downstream agents list, call the function 'sendMessage' send the message you've produced. Set the 'recipient' as the name of the downstream agent. -- Downstream agents: [] \ No newline at end of file +- To talk to other worker agents call the function 'broadcast'. At the beginning of the message identify yourself. +- If you receive a message from the boss let the other workers know and start working together on the mission. +- If you receive a message from other workers don't reply back unless necessary. Keep the worker channel as free from noise as possible. Share results in the channel to advance the mission, but do not send acknowledgements. +- Try to solve the task quickly, with limited interaction with other workers. +- To send the task results back to the boss call the function 'resolveTask'. +- Channels: [{'name': 'Worker', 'agents': ['Worker 1', 'Worker 2', 'Worker 3']}] + +# Worker 3 +MISSION +You are "Worker 3", one of three identical worker agents under a boss agent. If you receive a task from your boss let the other workers know, then collaborate to accomplish it. Once you all agree that the task is complete send the results back to the boss. + +INSTRUCTIONS +- Complete the task in your mission. +- To talk to other worker agents call the function 'broadcast'. At the beginning of the message identify yourself. +- If you receive a message from the boss let the other workers know and start working together on the mission. +- If you receive a message from other workers don't reply back unless necessary. Keep the worker channel as free from noise as possible. 
Share results in the channel to advance the mission, but do not send acknowledgements. +- Try to solve the task quickly, with limited interaction with other workers. +- To send the task results back to the boss call the function 'resolveTask'. +- Channels: [{'name': 'Worker', 'agents': ['Worker 1', 'Worker 2', 'Worker 3']}] \ No newline at end of file From 6e9a4d9ba06599a5ba80e74adc21f1e622c47e86 Mon Sep 17 00:00:00 2001 From: guillermo-delrio Date: Sun, 12 Nov 2023 00:35:20 +0100 Subject: [PATCH 035/141] Build basic boss-worker topology. Still very buggy --- agent_functions/agents.yaml | 27 +- agent_functions/connect.py | 233 +++++++++++++----- .../examples/text-manipulation/agents.yaml | 8 + .../examples/text-manipulation/prompts.md | 26 ++ agent_functions/functions.md | 66 +++++ agent_functions/prompts.md | 48 +++- 6 files changed, 326 insertions(+), 82 deletions(-) create mode 100644 agent_functions/examples/text-manipulation/agents.yaml create mode 100644 agent_functions/examples/text-manipulation/prompts.md diff --git a/agent_functions/agents.yaml b/agent_functions/agents.yaml index a10c55c..09d8ac1 100644 --- a/agent_functions/agents.yaml +++ b/agent_functions/agents.yaml @@ -1,8 +1,19 @@ -- name: "Uppercase" - id: "asst_OswAEoT5NnteGEzNIP9UOa7S" - talksTo: ["Lowercase", "Random case"] -- name: "Lowercase" - id: "asst_utnfDavVtGWkjFD3BEqXGR2O" - talksTo: ["Random case"] -- name: "Random case" - id: "asst_wFxS2rVmN9rjKDy7znaeNo3T" \ No newline at end of file +- name: "Boss" + id: "asst_xWMYPN4yIJfrhD7S9AzLI0TO" + talksTo: ["USER", "Worker 1", "Worker 2", "Worker 3"] + functions: ["assignTask"] +- name: "Worker 1" + id: "asst_akckTEWToGurRT3Zcp33Ahe8" + talksTo: ["Boss", "Worker 2", "Worker 3"] + channels: ["Worker"] + functions: ["resolveTask", "broadcast"] +- name: "Worker 2" + id: "asst_3r8hSgGMrjU2TVp9wqlkzsfx" + talksTo: ["Boss", "Worker 1", "Worker 3"] + channels: ["Worker"] + functions: ["resolveTask", "broadcast"] +- name: "Worker 3" + id: "asst_9u163B8jxCbJzGfBVFc2mMva" + talksTo: ["Boss", "Worker 1", "Worker 2"] + channels: ["Worker"] + functions: ["resolveTask", "broadcast"] \ No newline at end of file diff --git a/agent_functions/connect.py b/agent_functions/connect.py index 8c2f5ba..f3bd9a4 100644 --- a/agent_functions/connect.py +++ b/agent_functions/connect.py @@ -24,8 +24,80 @@ with open(yaml_file_path, 'r') as stream: agents = yaml.safe_load(stream) -messageQueue = [] +queues = {} +channels = [] + +def processSendMessage(agent, outputs, action, arguments): + if ('talksTo' in agent) and (arguments['recipient'] in agent['talksTo']): + if arguments['recipient'] == "USER": + print(f"[{agent['name']}] Result: {arguments['message']}") + os._exit(0) + else: + print(f"[{agent['name']}]->[{arguments['recipient']}] {arguments['message']}") + + queues[arguments['recipient']].put(arguments['message']) + outputs.append({ + "tool_call_id": action.id, + "output": "Message sent" + }) + else: + print(f"[{agent['name']}] ERROR unkown recipient {arguments['recipient']}") + +def broadcast(agent, outputs, action, arguments): + if ('channels' in agent) and (arguments['channel'] in agent['channels']): + for channel in channels: + if channel['name'] == arguments['channel']: + print(f"[{agent['name']}]->({arguments['channel']}) {arguments['message']}") + for recipient in channel['agents']: + if recipient != agent['name']: # Do not queue the message on the agent that sent in + queues[recipient].put(arguments['message']) + outputs.append({ + "tool_call_id": action.id, + "output": "Message sent" 
+ }) + else: + print(f"[{agent['name']}] ERROR unkown channel {arguments['channel']}") + outputs.append({ + "tool_call_id": action.id, + "output": "Unkown channel" + }) + +def assignTask(agent, arguments, actionId, threadId, runId): + print(f"[{agent['name']}]>[ASSIGN TASK {actionId}]>[{arguments['assignee']}] {arguments['task']}") + pendingActions.append({ + "id": actionId, + "agent": agent['name'], + "threadId": threadId, + "runId": runId, + "outputs": {}}) + + agentsWaitingForActions.append(agent['name']) + queues[arguments['assignee']].put(f"Task id: {actionId}\n{arguments['task']}") + +def resolveTask(agent, workerOutputs, action, arguments): + print(f"{arguments}") + print(f"[{agent['name']}]>[RESOLVE TASK {arguments['id']}] {arguments['result']}") + os._exit(0) + # outputs = [] + # outputs.append({ + # "tool_call_id": arguments['id'], + # "output": arguments['result'] + # }) + # for pendingAction in pendingActions: + # if pendingAction['id'] == arguments['id']: + # client.beta.threads.runs.submit_tool_outputs( + # thread_id=pendingAction['threadId'], + # run_id=pendingAction['runId'], + # tool_outputs=outputs + # ) + + # workerOutputs.append({ + # "tool_call_id": action.id, + # "output": "Task resolved" + # }) + + def handleThreadForAgent(agent): messages = [] @@ -40,75 +112,112 @@ def handleThreadForAgent(agent): queue = queues[agent['name']] waitingForMessages = True while True: - if waitingForMessages: - message = queue.get(block=True) - if message is not None: - waitingForMessages = False - # print(f"[{agent['name']}] Recieved: {message}") - messages.append(message) - client.beta.threads.messages.create( - thread_id=thread.id, - content=message, - role='user' - ) - - run = client.beta.threads.runs.create( - thread_id=thread.id, - assistant_id=agent['id'] - ) + if agent['name'] not in agentsWaitingForActions: + if waitingForMessages: + message = queue.get(block=True) + if message is not None: + waitingForMessages = False + # print(f"[{agent['name']}] Recieved: {message}") + messages.append(message) + client.beta.threads.messages.create( + thread_id=thread.id, + content=message, + role='user' + ) - else: - run = client.beta.threads.runs.retrieve(thread_id=thread.id, run_id=run.id) - if run.status == 'completed': - waitingForMessages = True + run = client.beta.threads.runs.create( + thread_id=thread.id, + assistant_id=agent['id'] + ) + + else: + run = client.beta.threads.runs.retrieve(thread_id=thread.id, run_id=run.id) + if run.status == 'completed': + waitingForMessages = True + + message_list = client.beta.threads.messages.list( + thread_id=thread.id + ) + retrievedMessages = [] + for datum in message_list.data: + for content in datum.content: + retrievedMessages.append(content.text.value) + retrievedMessages.reverse() - message_list = client.beta.threads.messages.list( - thread_id=thread.id - ) - retrievedMessages = [] - for datum in message_list.data: - for content in datum.content: - retrievedMessages.append(content.text.value) - retrievedMessages.reverse() - - i = len(messages) - while i < len(retrievedMessages): - retrievedMessage=retrievedMessages[i] - messages.append(retrievedMessage) - print(f"[{agent['name']}] Message: {retrievedMessage}") - # if 'talksTo' in agent: - # for downstreamAgent in agent['talksTo']: - # print(f"[{agent['name']}] Sending message to {downstreamAgent}") - # queues[downstreamAgent].put(retrievedMessage) - i+=1 - elif run.status == 'requires_action': - outputs = [] - for action in run.required_action.submit_tool_outputs.tool_calls: - function_name 
= action.function.name - arguments = json.loads(action.function.arguments) - if function_name == 'sendMessage': - if ('talksTo' in agent) and (arguments['recipient'] in agent['talksTo']): - print(f"[{agent['name']}]->[{arguments['recipient']}] {arguments['message']}") - queues[arguments['recipient']].put(arguments['message']) + i = len(messages) + while i < len(retrievedMessages): + retrievedMessage=retrievedMessages[i] + messages.append(retrievedMessage) + print(f"[{agent['name']}] Message: {retrievedMessage}") + i+=1 + elif run.status == 'requires_action': + outputs = [] + submitOutput=True + for action in run.required_action.submit_tool_outputs.tool_calls: + function_name = action.function.name + arguments = json.loads(action.function.arguments) + if function_name == 'sendMessage': + processSendMessage(agent, outputs, action, arguments) + elif function_name == 'broadcast': + broadcast(agent, outputs, action, arguments) + elif function_name == 'assignTask': + assignTask(agent, arguments, action.id, thread.id, run.id) + submitOutput=False + elif function_name == 'resolveTask': + resolveTask(agent, outputs, action, arguments) + else: + print(f"[{agent['name']}] ERROR unkown function {function_name}") outputs.append({ "tool_call_id": action.id, - "output": "Message sent" - }) - else: - print(f"[{agent['name']}] ERROR unkown recipient {arguments['recipient']}") - else: - print(f"[{agent['name']}] ERROR unkown function {function_name}") - client.beta.threads.runs.submit_tool_outputs( - thread_id=thread.id, - run_id=run.id, - tool_outputs=outputs - ) + "output": "Unkown function" + }) + + if submitOutput: + client.beta.threads.runs.submit_tool_outputs( + thread_id=thread.id, + run_id=run.id, + tool_outputs=outputs + ) time.sleep(1) -queues = {} +def buildChannel(agent): + if 'channels' in agent: + for channel in agent['channels']: + newChannel = True + for existingChannel in channels: + if existingChannel['name'] == channel: + existingChannel['agents'].append(agent['name']) + newChannel = False + if newChannel: + channels.append({"name": channel, "agents": [agent['name']]}) for agent in agents: + # Build private queues queues[agent['name']] = queueModule.Queue() + + # Build channels + buildChannel(agent) + +print(f"Channels: {channels}") + +# agent, threadId, runId, outputs +pendingActions = [] +agentsWaitingForActions = [] +def processPendingActions(): + while True: + for action in pendingActions: + if action['outputs']: # Output already set + client.beta.threads.runs.submit_tool_outputs( + thread_id=action['threadId'], + run_id=action['runId'], + tool_outputs=action['outputs'] + ) + agentsWaitingForActions.remove(action['agent']) + time.sleep(1) + +threading.Thread(target=processPendingActions, args=()).start() + +for agent in agents: threading.Thread(target=handleThreadForAgent, args=(agent,)).start() -queues['Uppercase'].put("aaaaa") \ No newline at end of file +queues['Boss'].put("Explain how clouds are formed in 100 words or less") \ No newline at end of file diff --git a/agent_functions/examples/text-manipulation/agents.yaml b/agent_functions/examples/text-manipulation/agents.yaml new file mode 100644 index 0000000..5ca8deb --- /dev/null +++ b/agent_functions/examples/text-manipulation/agents.yaml @@ -0,0 +1,8 @@ +- name: "Uppercase" + id: "asst_OswAEoT5NnteGEzNIP9UOa7S" + talksTo: ["USER", "Lowercase", "Random case"] +- name: "Lowercase" + id: "asst_utnfDavVtGWkjFD3BEqXGR2O" + talksTo: ["Random case"] +- name: "Random case" + id: "asst_wFxS2rVmN9rjKDy7znaeNo3T" \ No newline at end of 
file diff --git a/agent_functions/examples/text-manipulation/prompts.md b/agent_functions/examples/text-manipulation/prompts.md new file mode 100644 index 0000000..596b636 --- /dev/null +++ b/agent_functions/examples/text-manipulation/prompts.md @@ -0,0 +1,26 @@ +# Uppercase +MISSION +Take the input message and convert it to uppercase. + +INSTRUCTIONS +- Complete the task in your mission. +- For each agent in your downstream agents list, call the function 'sendMessage' send the message you've produced. Set the 'recipient' as the name of the downstream agent. +- Downstream agents: ['Lowercase', 'Random case'] + +# Lowercase +MISSION +Take the input message and convert it to lowercase. + +INSTRUCTIONS +- Complete the task in your mission. +- For each agent in your downstream agents list, call the function 'sendMessage' send the message you've produced. Set the 'recipient' as the name of the downstream agent. +- Downstream agents: ['Random case'] + +# Random case +MISSION +Take the input message and convert it to random case. + +INSTRUCTIONS +- Complete the task in your mission. +- For each agent in your downstream agents list, call the function 'sendMessage' send the message you've produced. Set the 'recipient' as the name of the downstream agent. +- Downstream agents: [] \ No newline at end of file diff --git a/agent_functions/functions.md b/agent_functions/functions.md index 95083e1..1cb6611 100644 --- a/agent_functions/functions.md +++ b/agent_functions/functions.md @@ -19,3 +19,69 @@ ] } } + +# Broadcast +{ + "name": "broadcast", + "description": "Broadcast a message on a channel", + "parameters": { + "type": "object", + "properties": { + "channel": { + "type": "string", + "description": "Channel name to broadcast the message to" + }, + "message": { + "type": "string", + "description": "Message to broadcast" + } + }, + "required": [ + "channel", "message" + ] + } +} + +# Assign task +{ + "name": "assignTask", + "description": "Assign a task to the worker agents", + "parameters": { + "type": "object", + "properties": { + "assignee": { + "type": "string", + "description": "Name of the agent assigned to this task" + }, + "task": { + "type": "string", + "description": "Description of the task" + } + }, + "required": [ + "description" + ] + } +} + +# Resolve task +{ + "name": "resolveTask", + "description": "Send final task results to the boss agent", + "parameters": { + "type": "object", + "properties": { + "id": { + "type": "string", + "description": "Task id provided when the task was assigned" + }, + "result": { + "type": "string", + "description": "Result of the task" + } + }, + "required": [ + "description" + ] + } +} \ No newline at end of file diff --git a/agent_functions/prompts.md b/agent_functions/prompts.md index 596b636..4fdb77d 100644 --- a/agent_functions/prompts.md +++ b/agent_functions/prompts.md @@ -1,26 +1,50 @@ -# Uppercase +# Boss MISSION -Take the input message and convert it to uppercase. +- You are a boss agent in charge of three worker agents. +- You'll be given a project to work on. Think step by step about how to tackle it. +- Split it in reasonable tasks and send them to a worker agent one at a time. Wait for the answer to the first worker task before sending another task. +- Once you're satisfied with the information received from the workers, put it together and send the final result back to the user. INSTRUCTIONS - Complete the task in your mission. 
-- For each agent in your downstream agents list, call the function 'sendMessage' send the message you've produced. Set the 'recipient' as the name of the downstream agent. -- Downstream agents: ['Lowercase', 'Random case'] +- To assign a task to the workers call the function 'assignTask'. At the beginning of the message identify yourself. +- Agents: ["USER", "Worker 1", "Worker 2", "Worker 3"] -# Lowercase +# Worker 1 MISSION -Take the input message and convert it to lowercase. +You are "Worker 1", one of three identical worker agents under a boss agent. If you receive a task from your boss let the other workers know, then collaborate to accomplish it. Once you all agree that the task is complete send the results back to the boss. INSTRUCTIONS - Complete the task in your mission. -- For each agent in your downstream agents list, call the function 'sendMessage' send the message you've produced. Set the 'recipient' as the name of the downstream agent. -- Downstream agents: ['Random case'] +- To talk to other worker agents call the function 'broadcast'. At the beginning of the message identify yourself. +- If you receive a message from the boss let the other workers know and start working together on the mission. Make sure to pass the task id provided by the boss. +- If you receive a message from other workers don't reply back unless necessary. Keep the worker channel as free from noise as possible. Share results in the channel to advance the mission, but do not send acknowledgements. +- Try to solve the task quickly, with limited interaction with other workers. +- To send the task results back to the boss call the function 'resolveTask'. Pass the id recieved from the boss when the task was assigned. +- Channels: [{'name': 'Worker', 'agents': ['Worker 1', 'Worker 2', 'Worker 3']}] -# Random case +# Worker 2 MISSION -Take the input message and convert it to random case. +You are "Worker 2", one of three identical worker agents under a boss agent. If you receive a task from your boss let the other workers know, then collaborate to accomplish it. Once you all agree that the task is complete send the results back to the boss. INSTRUCTIONS - Complete the task in your mission. -- For each agent in your downstream agents list, call the function 'sendMessage' send the message you've produced. Set the 'recipient' as the name of the downstream agent. -- Downstream agents: [] \ No newline at end of file +- To talk to other worker agents call the function 'broadcast'. At the beginning of the message identify yourself. +- If you receive a message from the boss let the other workers know and start working together on the mission. +- If you receive a message from other workers don't reply back unless necessary. Keep the worker channel as free from noise as possible. Share results in the channel to advance the mission, but do not send acknowledgements. +- Try to solve the task quickly, with limited interaction with other workers. +- To send the task results back to the boss call the function 'resolveTask'. +- Channels: [{'name': 'Worker', 'agents': ['Worker 1', 'Worker 2', 'Worker 3']}] + +# Worker 3 +MISSION +You are "Worker 3", one of three identical worker agents under a boss agent. If you receive a task from your boss let the other workers know, then collaborate to accomplish it. Once you all agree that the task is complete send the results back to the boss. + +INSTRUCTIONS +- Complete the task in your mission. +- To talk to other worker agents call the function 'broadcast'. 
At the beginning of the message identify yourself. +- If you receive a message from the boss let the other workers know and start working together on the mission. +- If you receive a message from other workers don't reply back unless necessary. Keep the worker channel as free from noise as possible. Share results in the channel to advance the mission, but do not send acknowledgements. +- Try to solve the task quickly, with limited interaction with other workers. +- To send the task results back to the boss call the function 'resolveTask'. +- Channels: [{'name': 'Worker', 'agents': ['Worker 1', 'Worker 2', 'Worker 3']}] \ No newline at end of file From 0be7d03a96413bb110812eec7812e547dea6d921 Mon Sep 17 00:00:00 2001 From: FireMMDC Date: Sat, 11 Nov 2023 20:10:20 -0500 Subject: [PATCH 036/141] Adding a settings.json file to maintain the additional parameters available during the Assistant creation process for an Agent, updated create.py to use model and tools properties present in the new settings.json file --- .../settings.json | 6 +++++ agent_builder/create.py | 22 ++++++++++++------- 2 files changed, 20 insertions(+), 8 deletions(-) create mode 100644 agent_builder/agents/Autonomous Swarm Agent Builder/settings.json diff --git a/agent_builder/agents/Autonomous Swarm Agent Builder/settings.json b/agent_builder/agents/Autonomous Swarm Agent Builder/settings.json new file mode 100644 index 0000000..fea15cf --- /dev/null +++ b/agent_builder/agents/Autonomous Swarm Agent Builder/settings.json @@ -0,0 +1,6 @@ +{ + "model": "gpt-4-1106-preview", + "description": "Foundation Swarm Builder", + "tools": [{ "type": "code_interpreter" }], + "metadata": {} +} diff --git a/agent_builder/create.py b/agent_builder/create.py index f842678..e94b9f9 100644 --- a/agent_builder/create.py +++ b/agent_builder/create.py @@ -1,5 +1,6 @@ from openai import OpenAI import os +import json import dotenv dotenv.load_dotenv() @@ -41,7 +42,14 @@ if os.path.isfile(instructions_file_path): with open(instructions_file_path, 'r') as f: instructions = f.read() - + + # Read contents from the 'settings.json' file + settings = {} + settings_file_path = os.path.join(agent_folder, 'settings.json') + if os.path.isfile(settings_file_path): + with open(settings_file_path, 'r') as f: + settings = json.load(f) + # Check for the 'files' subfolder and process its contents files = [] files_folder = os.path.join(agent_folder, 'files') @@ -54,9 +62,7 @@ with open(file_path, 'rb') as file_data: # Upload each file to OpenAI file_object = client.files.create(file=file_data, purpose='assistants') - files.append({"name": filename, "id": file_object.id}) - - model = 'gpt-4-1106-preview' + files.append({"name": filename, "id": file_object.id}) print(agent_name) print("") @@ -69,7 +75,7 @@ if existing_agent: print(f"{agent_name} already exists... 
validating properties") - update_model = existing_agent.model != model + update_model = existing_agent.model != settings["model"] update_instructions = existing_agent.instructions != instructions #need to evaluate tools @@ -79,7 +85,7 @@ existing_files_set = set(existing_files.keys()) if update_model: - update_params["model"] = model + update_params["model"] = settings["model"] if update_instructions: update_params["instructions"] = instructions if files or requested_files_set != existing_files_set: @@ -104,8 +110,8 @@ create_params = { "name": agent_name, "instructions": instructions, - "model": model, - "tools": [{'type': 'code_interpreter'}] + "model": settings["model"], + "tools": settings["tools"] } # Only include 'file_ids' if there are files From eac0bf74eb179f4fdc1d8cd80f7a7950ab3e8ada Mon Sep 17 00:00:00 2001 From: FireMMDC Date: Sat, 11 Nov 2023 20:10:20 -0500 Subject: [PATCH 037/141] Adding a settings.json file to maintain the additional parameters available during the Assistant creation process for an Agent, updated create.py to use model and tools properties present in the new settings.json file --- .../settings.json | 6 +++++ agent_builder/create.py | 22 ++++++++++++------- 2 files changed, 20 insertions(+), 8 deletions(-) create mode 100644 agent_builder/agents/Autonomous Swarm Agent Builder/settings.json diff --git a/agent_builder/agents/Autonomous Swarm Agent Builder/settings.json b/agent_builder/agents/Autonomous Swarm Agent Builder/settings.json new file mode 100644 index 0000000..fea15cf --- /dev/null +++ b/agent_builder/agents/Autonomous Swarm Agent Builder/settings.json @@ -0,0 +1,6 @@ +{ + "model": "gpt-4-1106-preview", + "description": "Foundation Swarm Builder", + "tools": [{ "type": "code_interpreter" }], + "metadata": {} +} diff --git a/agent_builder/create.py b/agent_builder/create.py index f842678..e94b9f9 100644 --- a/agent_builder/create.py +++ b/agent_builder/create.py @@ -1,5 +1,6 @@ from openai import OpenAI import os +import json import dotenv dotenv.load_dotenv() @@ -41,7 +42,14 @@ if os.path.isfile(instructions_file_path): with open(instructions_file_path, 'r') as f: instructions = f.read() - + + # Read contents from the 'settings.json' file + settings = {} + settings_file_path = os.path.join(agent_folder, 'settings.json') + if os.path.isfile(settings_file_path): + with open(settings_file_path, 'r') as f: + settings = json.load(f) + # Check for the 'files' subfolder and process its contents files = [] files_folder = os.path.join(agent_folder, 'files') @@ -54,9 +62,7 @@ with open(file_path, 'rb') as file_data: # Upload each file to OpenAI file_object = client.files.create(file=file_data, purpose='assistants') - files.append({"name": filename, "id": file_object.id}) - - model = 'gpt-4-1106-preview' + files.append({"name": filename, "id": file_object.id}) print(agent_name) print("") @@ -69,7 +75,7 @@ if existing_agent: print(f"{agent_name} already exists... 
validating properties") - update_model = existing_agent.model != model + update_model = existing_agent.model != settings["model"] update_instructions = existing_agent.instructions != instructions #need to evaluate tools @@ -79,7 +85,7 @@ existing_files_set = set(existing_files.keys()) if update_model: - update_params["model"] = model + update_params["model"] = settings["model"] if update_instructions: update_params["instructions"] = instructions if files or requested_files_set != existing_files_set: @@ -104,8 +110,8 @@ create_params = { "name": agent_name, "instructions": instructions, - "model": model, - "tools": [{'type': 'code_interpreter'}] + "model": settings["model"], + "tools": settings["tools"] } # Only include 'file_ids' if there are files From 62d744d92ad2463e69cae959b983a7d992b5dda6 Mon Sep 17 00:00:00 2001 From: Oliver Wiggins-Hay Date: Sun, 12 Nov 2023 04:04:07 +0000 Subject: [PATCH 038/141] Refined Tool Function Chain Implementation of the unit architecture --- tool_maker/README.md | 7 +- .../assistant_manager.cpython-39.pyc | Bin 0 -> 4322 bytes .../__pycache__/chat_manager.cpython-39.pyc | Bin 0 -> 5472 bytes .../__pycache__/tool_manager.cpython-39.pyc | Bin 0 -> 1155 bytes tool_maker/assistant_manager.py | 105 ++++++++++ tool_maker/chat_manager.py | 195 ++++++++++++++++++ tool_maker/tool_maker_demo.py | 140 ------------- tool_maker/tool_manager.py | 28 +++ tool_maker/unit_manager.py | 39 ++++ 9 files changed, 368 insertions(+), 146 deletions(-) create mode 100644 tool_maker/__pycache__/assistant_manager.cpython-39.pyc create mode 100644 tool_maker/__pycache__/chat_manager.cpython-39.pyc create mode 100644 tool_maker/__pycache__/tool_manager.cpython-39.pyc create mode 100644 tool_maker/assistant_manager.py create mode 100644 tool_maker/chat_manager.py delete mode 100644 tool_maker/tool_maker_demo.py create mode 100644 tool_maker/tool_manager.py create mode 100644 tool_maker/unit_manager.py diff --git a/tool_maker/README.md b/tool_maker/README.md index 32f1be3..eaf0c39 100644 --- a/tool_maker/README.md +++ b/tool_maker/README.md @@ -5,7 +5,7 @@ This preliminary experiment has to do with getting the OpenAI Assistant endpoint This function is as-yet undefined. # Version 1 -run the ```tool_maker_demo.py``` file. +run the ```unit_manager.py``` file. You will be prompted to define a tool for creation. The assistant will then generate an OpenAI tool compatible JSON schema defining the name of the new function, it's description and the input argument schema. It will proceed to add this tool to the current assistant. @@ -51,8 +51,3 @@ Note: Remember to prioritize user requirements and emphasize clear communication ## Flowchart ![[imgs/flow.png]] - -# Next Steps -This process allows the assistant to request a function with some parameters, and allows for the addition of that function to the assistants tool, but no such function is ever actually made. - -A function object needs to be saved that a function_finder can look for to run the output parameters of the assistant, even the output in the example image above is a hallucination. 
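The flow this README describes — the assistant emits a `function_request` tool call carrying a name, description, and serialized JSON schema, and the host adds the corresponding tool to the assistant — comes down to a single update call. The sketch below mirrors the `client.beta.assistants.update` usage in the chat_manager.py added later in this patch; the helper name and the de-duplication detail are illustrative, not the repository's exact implementation.

```python
# Hedged sketch: convert a `function_request` tool call into a tool on the
# assistant. Mirrors the assistants.update call used in chat_manager.py below;
# `add_requested_tool` is an illustrative name, not a function from the repo.
import json

def add_requested_tool(client, assistant, call):
    args = json.loads(call.function.arguments)         # name, description, schema
    tool = {
        "type": "function",
        "function": {
            "name": args["name"],
            "description": args.get("description", ""),
            "parameters": json.loads(args["schema"]),   # schema arrives serialized
        },
    }
    # Drop any existing function tool with the same name, then append the new one.
    kept = [
        t for t in assistant.tools
        if getattr(t, "type", "") != "function" or t.function.name != args["name"]
    ]
    return client.beta.assistants.update(assistant_id=assistant.id, tools=[*kept, tool])
```

Keeping the schema as a serialized string inside the request lets a single `function_request` tool cover functions of any shape, at the cost of one extra `json.loads` on the host side.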
\ No newline at end of file
diff --git a/tool_maker/__pycache__/assistant_manager.cpython-39.pyc b/tool_maker/__pycache__/assistant_manager.cpython-39.pyc new file mode 100644 index 0000000000000000000000000000000000000000..5af6ff3cecec32396f1e7a3e3ba6160d6efdd33d GIT binary patch literal 4322 [compiled .pyc payload omitted]
diff --git a/tool_maker/__pycache__/chat_manager.cpython-39.pyc b/tool_maker/__pycache__/chat_manager.cpython-39.pyc new file mode 100644 index 0000000000000000000000000000000000000000..cb43c42a0e1dc4f0455b1ee1e70249aee7353050 GIT binary patch literal 5472 [compiled .pyc payload omitted]
diff --git a/tool_maker/__pycache__/tool_manager.cpython-39.pyc b/tool_maker/__pycache__/tool_manager.cpython-39.pyc new file mode 100644 index 0000000000000000000000000000000000000000..60637ec851e5ad2445e3365e8ca04bc5386d476f GIT binary patch literal 1155 [compiled .pyc payload omitted]
diff --git a/tool_maker/assistant_manager.py b/tool_maker/assistant_manager.py new file mode 100644 index 0000000..fe13bf2 --- /dev/null +++ b/tool_maker/assistant_manager.py @@ -0,0 +1,105 @@ +from tool_manager import ToolManager +import json + + +class AssistantManager: + request_function_tool = r"""{ + "name": "function_request", + "description": "request an authority to grant you access to a new function", + "parameters": { + "type": "object", + "properties": { + "name": { + "type": "string", + "description": "name of the function" + }, + "description": { + "type": "string", + "description": "expected function behaviour" + }, + "schema": { + "type": "string", + "description": "the input arguments for the requested function following the JSON schema in a format ready to be serialized" + } + }, + "required": [ + "name", + "schema" + ] + } + }""" + + def __init__(self, client): + self.client = client + self.assistant = None + with open("tool_maker/tool_creator_metadata.json", "r") as file: + self.assistant_package = json.load(file) + + def get_assistant(self): + """Retrieve or create an assistant for testing this functionality""" + if not self.assistant_package["name"] in [ + assistant.name for assistant in self.client.beta.assistants.list() + ]: + assistant = self.make_tool_creation_assistant() + else: + assistant_dict = { + assistant.name: assistant.id + for
assistant in self.client.beta.assistants.list() + } + assistant = self.client.beta.assistants.retrieve( + assistant_id=assistant_dict[self.assistant_package["name"]] + ) + self.assistant = assistant + return assistant + + def get_coding_assistant(self): + """Retrieve or create an assistant for testing this functionality""" + name = "temporary_function_writer" + if not name in [ + assistant.name for assistant in self.client.beta.assistants.list() + ]: + assistant = self.make_coding_assistant() + else: + assistant_dict = { + assistant.name: assistant.id + for assistant in self.client.beta.assistants.list() + } + assistant = self.client.beta.assistants.retrieve( + assistant_id=assistant_dict[name] + ) + self.assistant = assistant + return assistant + + def make_tool_creation_assistant(self): + tools = [ + ToolManager.tool_from_function_schema( + json.loads(AssistantManager.request_function_tool) + ) + ] + assistant = self.client.beta.assistants.create( + model=self.assistant_package["model"], + description=self.assistant_package["description"], + instructions=self.assistant_package["instructions"], + name=self.assistant_package["name"], + tools=tools, + ) + return assistant + + def make_coding_assistant(self): + code_assistant = self.client.beta.assistants.create( + model="gpt-4-1106-preview", + instructions="you will be provided a json schema of an OpenAI function tool from an API not a human user. The json will contain all information about the function you will need to write it in python code. You will return only the python function you wrote and no additional text as you are talking to an API and extraneous output will cause execution errors. You must always implement the actual code. Generic placeholders or pseudo code will break the api. If you need clarification to write real functioning code, request for extra info in arguments without creating a real function or valid schema", + name="temporary_function_writer", + ) + return code_assistant + + +if __name__ == "__main__": + from openai import OpenAI + import os + + apikey = os.getenv("OPENAI_API_KEY") + client = OpenAI(api_key=apikey) + assistant_manager = AssistantManager(client=client) + assistant = assistant_manager.get_assistant() + print(assistant) diff --git a/tool_maker/chat_manager.py b/tool_maker/chat_manager.py new file mode 100644 index 0000000..d1e938d --- /dev/null +++ b/tool_maker/chat_manager.py @@ -0,0 +1,195 @@ +from openai import OpenAI +import importlib +from tool_manager import ToolManager +import json + +Assistant = type(OpenAI().beta.assistants.list().data[0]) +Thread = type(OpenAI().beta.threads.create()) + + +class ChatManager: + def __init__(self, client: OpenAI): + self.client = client + + def create_thread_from_user_input(self): + return self.client.beta.threads.create( + messages=[{"role": "user", "content": input("Begin\n")}] + ) + + def create_empty_thread(self): + return self.client.beta.threads.create() + + def run_python_from_function_name(self, call): + print("CALLING FUNCTION") + try: + function_name = call.function.name + fn = getattr( + importlib.import_module(f"python_functions.{function_name}"), + function_name, + ) + result = fn(**json.loads(call.function.arguments)) + response = {"tool_call_id": call.id, "output": f"result:{result}"} + except Exception as error: + response = { + "tool_call_id": call.id, + "output": f"{{{type(error)}:{error.args}}}", + } + return response + + def handle_fucntion_request( + self, + call, + interface_assistant: Assistant, + interface_thread: Thread, + 
functional_assistant: Assistant, + functional_thread: Thread, + ): + try: + # Create Function Tool + schema = ToolManager.schema_from_response(call.function.arguments) + tool = ToolManager.tool_from_function_schema(schema) + if tool["function"]["name"] in [ + previous_tool.function.name + for previous_tool in interface_assistant.tools + ]: + tools = [ + previous_tool + for previous_tool in interface_assistant.tools + if previous_tool.function.name != tool["function"]["name"] + ] + interface_assistant = self.client.beta.assistants.update( + assistant_id=interface_assistant.id, + tools=[*tools, tool], + ) + else: + interface_assistant = self.client.beta.assistants.update( + assistant_id=interface_assistant.id, + tools=[*interface_assistant.tools, tool], + ) + + # Generate Python Function + self.client.beta.threads.messages.create( + thread_id=functional_thread.id, content=str(tool), role="user" + ) + functional_run = self.client.beta.threads.runs.create( + thread_id=functional_thread.id, + assistant_id=functional_assistant.id, + instructions="please remember you are talking to an API, minimize output text tokens for cost saving. Also realise that your output text must be directly runnable as a py file so begin with the def keyword and do not provide any text ouput which is not commented to avoid breaking the system. Target python 3 and windows", + ) + functional_response = self.simple_run( + run=functional_run, + thread=functional_thread, + ) + function_lines = functional_response.split("```python")[1].split("```")[0] + name = tool["function"]["name"] + with open(f"tool_maker/python_functions/{name}.py", "w") as file: + file.writelines(function_lines) + + response = {"tool_call_id": call.id, "output": "{success}"} + + except Exception as error: + # If error, pass details back to assistant for next steps + response = { + "tool_call_id": call.id, + "output": f"{{{type(error)}:{error.args}}}", + } + + return interface_assistant, response + + def simple_run(self, run, thread): + """Supply context to assistant and await for next user response""" + while run.status != "completed": + run = self.client.beta.threads.runs.retrieve( + run_id=run.id, thread_id=thread.id + ) + + response = ( + self.client.beta.threads.messages.list(thread_id=thread.id) + .data[0] + .content[0] + .text.value + ) + return response + + def begin_run( + self, + run, + interface_assistant, + interface_thread, + functional_assistant, + functional_thread, + ): + while run.status != "completed": + run = self.client.beta.threads.runs.retrieve( + run_id=run.id, thread_id=interface_thread.id + ) + if run.status == "requires_action": + tools = [] + responses = [] + for call in run.required_action.submit_tool_outputs.tool_calls: + print(f"calling: {call.function.name}") + if call.function.name == "function_request": + interface_assistant, response = self.handle_fucntion_request( + call=call, + interface_assistant=interface_assistant, + interface_thread=interface_thread, + functional_assistant=functional_assistant, + functional_thread=functional_thread, + ) + else: + response = self.run_python_from_function_name(call) + responses.append(response) + try: + run = self.client.beta.threads.runs.submit_tool_outputs( + run_id=run.id, + thread_id=interface_thread.id, + tool_outputs=responses, + ) + except: + print(run.status) + print(run) + print(call) + print(responses) + if run.status == "failed" or run.status == "expired": + print("DIED") + run.status = "completed" + response = ( + 
self.client.beta.threads.messages.list(thread_id=interface_thread.id) + .data[0] + .content[0] + .text.value + ) + return interface_assistant, response + + def run_unit( + self, + interface_assistant: Assistant, + interface_thread: Thread, + functional_assistant: Assistant, + functional_thread: Thread, + ): + self.client.beta.threads.messages.create( + thread_id=interface_thread.id, content=input("type: "), role="user" + ) + print() + interface_run = self.client.beta.threads.runs.create( + thread_id=interface_thread.id, + assistant_id=interface_assistant.id, + instructions="please remember you are talking to an API, minimize output text tokens for cost saving. You are also able to communicate with the function ai using the description property of function_request.", + ) + interface_assistant, response = self.begin_run( + run=interface_run, + interface_assistant=interface_assistant, + interface_thread=interface_thread, + functional_assistant=functional_assistant, + functional_thread=functional_thread, + ) + interface_thread = self.client.beta.threads.retrieve( + thread_id=interface_thread.id + ) + functional_thread = self.client.beta.threads.retrieve( + thread_id=functional_thread.id + ) + print(response) + print() + return interface_assistant, interface_thread, functional_thread diff --git a/tool_maker/tool_maker_demo.py b/tool_maker/tool_maker_demo.py deleted file mode 100644 index 0627c7d..0000000 --- a/tool_maker/tool_maker_demo.py +++ /dev/null @@ -1,140 +0,0 @@ -from openai import OpenAI -import os -import json - -api_key = os.getenv("OPENAI_API_KEY") -if api_key is None: - raise ValueError("The OPENAI_API_KEY environment variable is not set.") - -client = OpenAI(api_key=api_key) - -request_function_tool = r"""{ - "name": "function_request", - "description": "request an authority to grant you access to a new function", - "parameters": { - "type": "object", - "properties": { - "name": { - "type": "string", - "description": "name of the function" - }, - "description": { - "type": "string", - "description": "expected function behaviour" - }, - "schema": { - "type": "string", - "description": "the input arguments for the requested function following the JOSN schema in a format ready to be serialized" - } - }, - "required": [ - "name", - "schema" - ] - } -}""" - -with open("tool_maker/tool_creator_metadata.json", "r") as file: - assistant_package = json.load(file) - - -def tool_from_function_schema(schema): - """takes a JSON schema and wraps in an OpenAI specified tool structure""" - tool = f"""{{ - "type":"function", - "function": {json.dumps(schema)}}} - """ - tool = json.loads(tool) - return tool - - -def schema_from_response(response): - """Takes an agent response and forms a JSON schema""" - function_request_obj = json.loads(response) - name = function_request_obj["name"] - description = function_request_obj["description"] - schema = function_request_obj["schema"] - schema = rf"""{{ - "name": "{name}", - "description": "{description}", - "parameters": - {schema} - -}}""" - return json.loads(schema) - - -def get_assistant(): - """Retrieve or create an assistant for testing this functionality""" - if not assistant_package["name"] in [ - assistant.name for assistant in client.beta.assistants.list() - ]: - tools = [tool_from_function_schema(request_function_tool)] - assistant = client.beta.assistants.create( - model=assistant_package["model"], - description=assistant_package["description"], - instructions=assistant_package["instructions"], - name=assistant_package["name"], - tools=tools, 
- ) - else: - assistant_dict = { - assistant.name: assistant.id for assistant in client.beta.assistants.list() - } - assistant = client.beta.assistants.retrieve( - assistant_id=assistant_dict[assistant_package["name"]] - ) - return assistant - - -def run_response(run, assistant, thread): - """Supply context to assistant and await for next user response""" - while run.status != "completed": - run = client.beta.threads.runs.retrieve(run_id=run.id, thread_id=thread.id) - if run.status == "requires_action": - tools = [] - responses = [] - for call in run.required_action.submit_tool_outputs.tool_calls: - print(f"calling: {call.function.name}") - if call.function.name == "function_request": - schema = schema_from_response(call.function.arguments) - tool = tool_from_function_schema(schema) - tools.append(tool) - responses.append({"tool_call_id": call.id, "output": "{success}"}) - assistant = client.beta.assistants.update( - assistant_id=assistant.id, tools=[*assistant.tools, *tools] - ) - run = client.beta.threads.runs.submit_tool_outputs( - run_id=run.id, thread_id=thread.id, tool_outputs=responses - ) - print( - client.beta.threads.messages.list(thread_id=thread.id) - .data[0] - .content[0] - .text.value - ) - print( - [ - tool.function.name - for tool in client.beta.assistants.retrieve(assistant_id=assistant.id).tools - ] - ) - return run, assistant - - -if __name__ == "__main__": - assistant = get_assistant() - thread = client.beta.threads.create( - messages=[{"role": "user", "content": input("Begin\n")}] - ) - - while True: - run = client.beta.threads.runs.create( - thread_id=thread.id, - assistant_id=assistant.id, - instructions="please remember you are talking to an API, minimize output text tokens for cost saving.", - ) - run, assistant = run_response(run, assistant, thread) - client.beta.threads.messages.create( - thread_id=thread.id, content=input("respond: "), role="user" - ) diff --git a/tool_maker/tool_manager.py b/tool_maker/tool_manager.py new file mode 100644 index 0000000..0351df0 --- /dev/null +++ b/tool_maker/tool_manager.py @@ -0,0 +1,28 @@ +import json + + +class ToolManager: + @staticmethod + def tool_from_function_schema(schema): + """takes a JSON schema and wraps in an OpenAI specified tool structure""" + tool = f"""{{ + "type":"function", + "function": {json.dumps(schema)}}} + """ + tool = json.loads(tool) + return tool + + @staticmethod + def schema_from_response(response): + """Takes an agent response and forms a JSON schema""" + function_request_obj = json.loads(response) + name = function_request_obj["name"] + description = function_request_obj["description"] + schema = function_request_obj["schema"] + schema = rf"""{{ + "name": "{name}", + "description": "{description}", + "parameters": + {schema} + }}""" + return json.loads(schema) diff --git a/tool_maker/unit_manager.py b/tool_maker/unit_manager.py new file mode 100644 index 0000000..290b1fd --- /dev/null +++ b/tool_maker/unit_manager.py @@ -0,0 +1,39 @@ +from assistant_manager import AssistantManager +from chat_manager import ChatManager + + +class Unit: + def __init__(self, client): + self.assistant_manager = AssistantManager(client=client) + self.chat_manager = ChatManager(client=client) + self.interface_assistant = self.assistant_manager.get_assistant() + self.functional_assistant = self.assistant_manager.get_coding_assistant() + + self.interface_thread = self.chat_manager.create_empty_thread() + self.functional_thread = self.chat_manager.create_empty_thread() + + def chat(self): + while True: + ( + 
self.interface_assistant, + self.interface_thread, + self.functional_thread, + ) = self.chat_manager.run_unit( + interface_assistant=self.interface_assistant, + interface_thread=self.interface_thread, + functional_assistant=self.functional_assistant, + functional_thread=self.functional_thread, + ) + + +if __name__ == "__main__": + import os + from openai import OpenAI + + api_key = os.getenv("OPENAI_API_KEY") + if api_key is None: + raise ValueError("The OPENAI_API_KEY environment variable is not set.") + client = OpenAI(api_key=api_key) + + unit = Unit(client=client) + unit.chat() From 032d8f4a9d584181195e47cb649d69c14f061965 Mon Sep 17 00:00:00 2001 From: Oliver Wiggins-Hay Date: Sun, 12 Nov 2023 04:04:07 +0000 Subject: [PATCH 039/141] Refined Tool Function Chain Implementation of the unit architecture --- tool_maker/README.md | 7 +- .../assistant_manager.cpython-39.pyc | Bin 0 -> 4322 bytes .../__pycache__/chat_manager.cpython-39.pyc | Bin 0 -> 5472 bytes .../__pycache__/tool_manager.cpython-39.pyc | Bin 0 -> 1155 bytes tool_maker/assistant_manager.py | 105 ++++++++++ tool_maker/chat_manager.py | 195 ++++++++++++++++++ tool_maker/tool_maker_demo.py | 140 ------------- tool_maker/tool_manager.py | 28 +++ tool_maker/unit_manager.py | 39 ++++ 9 files changed, 368 insertions(+), 146 deletions(-) create mode 100644 tool_maker/__pycache__/assistant_manager.cpython-39.pyc create mode 100644 tool_maker/__pycache__/chat_manager.cpython-39.pyc create mode 100644 tool_maker/__pycache__/tool_manager.cpython-39.pyc create mode 100644 tool_maker/assistant_manager.py create mode 100644 tool_maker/chat_manager.py delete mode 100644 tool_maker/tool_maker_demo.py create mode 100644 tool_maker/tool_manager.py create mode 100644 tool_maker/unit_manager.py diff --git a/tool_maker/README.md b/tool_maker/README.md index 32f1be3..eaf0c39 100644 --- a/tool_maker/README.md +++ b/tool_maker/README.md @@ -5,7 +5,7 @@ This preliminary experiment has to do with getting the OpenAI Assistant endpoint This function is as-yet undefined. # Version 1 -run the ```tool_maker_demo.py``` file. +run the ```unit_manager.py``` file. You will be prompted to define a tool for creation. The assistant will then generate an OpenAI tool compatible JSON schema defining the name of the new function, it's description and the input argument schema. It will proceed to add this tool to the current assistant. @@ -51,8 +51,3 @@ Note: Remember to prioritize user requirements and emphasize clear communication ## Flowchart ![[imgs/flow.png]] - -# Next Steps -This process allows the assistant to request a function with some parameters, and allows for the addition of that function to the assistants tool, but no such function is ever actually made. - -A function object needs to be saved that a function_finder can look for to run the output parameters of the assistant, even the output in the example image above is a hallucination. 
\ No newline at end of file diff --git a/tool_maker/__pycache__/assistant_manager.cpython-39.pyc b/tool_maker/__pycache__/assistant_manager.cpython-39.pyc new file mode 100644 index 0000000000000000000000000000000000000000..5af6ff3cecec32396f1e7a3e3ba6160d6efdd33d GIT binary patch literal 4322 zcmb_f-EQ2*73Pp!?rOD?e;OB6;-*t2PP?h4*hSDE!*F9XX&@Jo-6}xqtpZ}inU%QW zlJt-(WmV})xiyNSFCbm|A^IY`?M?d%DO&eCLoT(u777O~7aR_EX3m_S@0|H4mX_KU zuD`$aME&NfW&Mp_rY{FCAK}hYG}4kRvWBe381?PQ?%AJO(vj|0mUK_;o}*mZ*td0r zQ7>u?y`ERiZ1$R}dDfE6bJkl>o?4Kt=d9OOj%v#V^cL0PS?Ap9EvaR-bV~B=Evwu9 z2X?i(tXTX!Nutk0EOwRN4Cv44%f`z`xU-+3QC5#ht7l8rbEGYuudJRcp%$JE>B%OZ zo@~hlJe#sD7x8S#4mMjVu0G7NFv~@pS6hj{Li8Wo9M@_rhHCX5U+s_LAPPyz*J~# znJb+^Ua5ppBp;_nkz{9}g30z~B-Kf(bRMd?6_x52GG^ptxenvqYF8SeYjw^X62X%` z&-c`YqQ-EVeNy`1l7;`vp6bgK1}a%^$amD9I0%!GUcGedEZ9>+G2<c>gf!vbQ9M&R z6j6Aj)#kG3SPerhg z#P2$Vo2jU;8_>LP`(dPBSb7OBUw_=cx4oI=_$jr!57W44yu%ngv%CJ6}8!(zzUF2XFw{h?Qonvc& znz@2%MuiA(Y+w)I+;i(=>y#ZZef!utYn)prEdH=8QuA|8?8!d^@0j{V9wd6t*$FcN zY#uWNq51WL-zucCIzXaoZv3xI`I&gRgfU81iPjLfu4)DU^M)>N|> z5YlbJzK;wbq-!u0t60g^Fio`3V}F8U{}9Kl(lZ*qE`~MX`!ZnY8yL8Udji^@U+SMA zk;toyfc;;ws|nkeDo2t01u0j<^&6{KqgJm#5Zp@zR=-KjTh!b{Q`|7%oHtUxO%n)& zUAF`z)q%1Ce2;omo~O#d8-%2JAwR{OjJC6Em$l|F>6Mtg^;$lk2|N|9CT>%VAEA?5 zXE^1~oZKBW&e=KJC8$~hZ_pgH1`Fu5sZ^V%T|J%Z>Eg*`<9UOC2nc$cMPryq6%~t9 zM5*Y6F*wG^@Waehn{GqDi>>u7YAE%}8q}aH&$^46(AirtIp5dG@B+!@sjlf1DjIYm zW||BtZd4rPVkRw~3mJ}BI)0TYI+IfI1KOQJH5U~9UHrF!yQ&H$_>b_Tk#yyLW!FmTGkn zszdPzGKe%EhEW6>$U$`vLJ1-!IFW`4=K@0o;V#eq`g8-uP9je3rNM_!Hh7$X-SNE< z4u3uZA6w_2gY_9fW`iJ!b3yzKn}u=7;ZTja*h#<(r#Y};@@7q}pbf~pDM38UiT|Zz zF#nhX@M@huov4pOKGHEy;s}%w`&P3iMXVI5L1&<@gsxmj8Pa!!h&kU7}G7g1_%a&#d7>hv_j7S_;S|>UyH5-mH2#yZLIOE|kjTA9o zqn-%z5q2((_*BK9uK`aZkgUBVlK4C!x{+pTBoi~KiV*;~U#?HXb-vLzDjO#Sks8)>(-XHdp!Hr)jn+Y`tYt=c2sNC($aV{vLPML1VEtv+4IQuW|I=<%t7v zGv~!-cYzovaWLO6+P;taGK#3*@%?9rKslnj5U;OL!>M_P8p>cz8cb(;>j_`}ofG%G4>AA3ynIlj!r44gc4lJjF;;q@lmB#$9Hd z)VbwK78WkFa0i5vv!W{(xTWBr80We-$6}JP3mSz|c5hh# literal 0 HcmV?d00001 diff --git a/tool_maker/__pycache__/chat_manager.cpython-39.pyc b/tool_maker/__pycache__/chat_manager.cpython-39.pyc new file mode 100644 index 0000000000000000000000000000000000000000..cb43c42a0e1dc4f0455b1ee1e70249aee7353050 GIT binary patch literal 5472 zcmcIoTW=f372eq!mn)LGT8>huUG$Qcg{wFhB&h2qaqJ{Dnka7NG&QS$UU8PvQsgc@ zvy>ucnHL#oTEJ}r^dUf;LMrm2pV7auukB0zLE*G^zcb5~C`SqUQWA4^?sI15obP%qUjt!e+D!sO3D;bkPrA-Kj_SL?9`W0cpsdPB$4=o&q{kz{ruMqVO>W)MxV5J@rn$bV)$K2^l2&J#{Z>DS zE?-+gcIrkPci;3QedBp1&1{wH8!CRvpoDe>ZcQ%`gg+W^)SDNu?4S zE=(xATy$tL^eBFmH<6USFbFNxI$DQy^bQU*F}c3SxN%S4VBEaRa0u2)W)5T^hW0DL zMi{xY84-7btkjC51e)UZoL(R$u1?l1wa3g{4-!8sC7UAfxx~5QnFN^?M*Tr@$b^G+ z)iLCPjxBp4@6^l(HQOTYHECh&5`${!BI*1z=CazuVSRF=-#GFsyFa=unzPvbzHxvX z6D3sDO|=OzgS?m}F-Ky71YO;P&hz7ygI+(`9cw<1F2!|x=M*g``<}2QTANNn-OOK+ zN#Kop@WkO8DWe-2>4}k;`xe(z{Zs8T{eY$Vo<1^CqpgKnV(*t!Kj(mb&i=$kCQ4;) zq$c&{<~@C6y`x1}O)YVEKVjO)#?0qcA1j$k?LGFX#_iPZRPZc)W)&lD!j?z&2X+K6 zr{@Ljf|gpa3}-Fxlrl`A){y|pr&?e8X=anx)NqE-^dkz5{@L?8#< z>;s3?1G`Q2_aZXcb|F6$*_ zhnY$GG^)jtO|_!ljxsw}Jah2uV=FM=K!{j8Y6VY-LDVevO=f#ysztJga}XMHn66Td z=?_fvF}BERy2%zFoHI|_^Eb{{H?w)1Q6SpDjHWMTC=(xNHgjB=yHK`J=;3!K`pr(% z<{ewguSi4urjf`B4J7#&pv6aQ_)4PfGgt~R;((28l+4sVV1Lj)goQ34P9PcFyrZRh z$GQsK-_}I;_gb5ASkWD)s2!EyZ8k-XP8oIXhL)E8#y-?OX579D>yEsyv}3->9^>|@ zPNh>#t&W@2KE!Ig#GShaBVVT3r#jPlYic4g0s8M6BL{6WsgqiK3M-Y<^8PHZ957y8 z(vYhmH@&2d+~fU>rjj{6lTN`GWPb74ZZvNDzRZ05}3Sy1G>)Nsxs@*50j*==uQjAM3YGz#PHY zp?*BYp`Pi>7aG;R2)4s`Ae-c;kD9rAsS8tU#l8NE&(iD^MQmMvIDBCe^HK=2@p8?d^ogCZ@uGpzUo1yF0PqP>y>%_M$lP`Xb&AIYvjjxo+_QKohOJF6 
zl&TLdnh-SLYGd1R$h~!FnQs+S%UziSy=CtPur$Dw3irKd)iOI_#N!=#XoyR6gg=GI zDq$oOG039=?!dqvGCM!)VfogrTloxM5J4FlC<~Q4r$UCJKu;AITf~;9h6XVqwlgfh zF9)p_Sjb46r4^S*JWk?U5cRWyd?H(rt<9k4=iH(I_1p`{;<6>RmSP?SNH+5!;v&sO zIH_ns<{lXk*v+8N!EVG4s8#U=!cavTG6P!%Um8R*b0q>6JcG7v$$mFXGBXAN&fFal zCP6oh0x6!Pkv}ByBZ&HR&Xnp@@f3**B%UVm3`Ay6XxFF}Is@*pNMnAoJdR+;otpWA z1R$A7)+5f-v_B!CSPprOP|WP?*Z4H)DOVsg$JA@!Wiz_@z%gA&+j5!7+y{<&Rvy@P zg;gHfMg_Gd)Bj^zkcfgdqF{wBupjF)`i!Asp*8&8|9~egy3pcBNJQxhlDrN;l#PVby zk1Z`Xs9&I!Xh;$;%*icfM_ zaSaVEBuPX?(_P$$^U$#tnf}n!aYGMvd$>4xBgb}pf;f<=&~oY$p2vQQOZW(U|LCZT zYcPsyP`!*TUHo$c^b(0ktfB~4wRf+NoCH^qI&>98V~GyuC2nEN2bi~tu&$?|xd`s| zU26?QVpNXYRNJqmpu!0DrTyu&OhNnhD?_3fg+VyVAfQQF;w6v-hCscXhw#KDy=XPc zji1i=S)%7nEz&`R$s*5H^i@LQnIx% z3(rH7Y}jod-p}!_{9Yd`@rHTz+N)Q^Ds@*@xhQT>{s{bzO;Wr~4TO6~it@3djd?j( z@8R`SL29v6_+70+wTJwxlkzj8>)VXgbK#LrQA)3x;tUpr$D0u z>_)k8K$)M+DfXY0Rmdd76qr}RhvopS;03lpC}y%6n?qS47NHi?zpOaqC%UR7Jaa7@ z_~+nX1DfHfpYWeElm2tGUW@on@uRmtO3HmcK?a6;WU9oV6Pw-!>4OJx-K4ZfEVJn2 zbWG|4BLw5-7L3_KTRBBd?i^#+0DC_g2Mf6`p3K4uuTtIQblNNs9eiDUYc3kqwSx>excv5^OiOU`U7Ak9-^| zCS0l4=u;BC^fsHB+Z?}7BVTDYZx8%#-XiG5LeQrsafSp1i`-qymy0l%**DY&pWK%z zuH^e)qug=SQ8`751>6KIc(0lzWxLF_he*;_;UqpNcg5X-a0!rTh!Siv)(U0s&~ip*#9x HY|i*E!uwo^ literal 0 HcmV?d00001 diff --git a/tool_maker/__pycache__/tool_manager.cpython-39.pyc b/tool_maker/__pycache__/tool_manager.cpython-39.pyc new file mode 100644 index 0000000000000000000000000000000000000000..60637ec851e5ad2445e3365e8ca04bc5386d476f GIT binary patch literal 1155 zcmZ`&Piqu06i+hQ?R47W!5*w$!lJ^0RJ<;tmI|WKt*|{U3n86k+tr;vO=c+_cTerN zkb*t>mFDWnuOL!=Z>F;yMKk2({mFaz<^3jVVngox|LP=>!N(GxCMb$rb$!kS@Ii+H6j*=z`7>mY45l>Z5T& ziFbqjzHs5OiIh;W7N=IFPE2Aj#C~exowvfJCY+2XMvDv(2$$Jen9Z!opM46%&FExj zsp<4O<5?VLlO*muxeYCSJ&^~CrN8SH{DVv4qNQh1>WbDhQQCD`(XQYOIUe_w*cG_a zkC)@#sGnrUjrPvOAXT9mRq*7_k%==oI8`vP)izKgHn@#P1kgbEPw*dVD(`2VC9<68Bo-k zdm!6$4szvkWx>l#euY%|`O|}nDxd^!Dy(s75<64UHBM~w7mWpvRb+}aZCq$4DY9YP z04KlOpbNKjdZLp*XfoAEDT9xj!BkmD%8YdoTe67!=fTokbeYAU+hRAmt)X9|9d0hW zZq3QexJ)L8A1t=ruAxAPCDoPi!7Mf_f@H~BUFi-)+&vJ)3#eayjIv*xv1`uRPrsdu zCVBrV(TC^cONmX8vIwM%5n>B-B!pWZC)%#Uk#-#g(%cY;)z+`j iq2*IXgH=XLZ`~5y(a0)`l-Km1SLeULSM6|{@BIc5=^c6i literal 0 HcmV?d00001 diff --git a/tool_maker/assistant_manager.py b/tool_maker/assistant_manager.py new file mode 100644 index 0000000..fe13bf2 --- /dev/null +++ b/tool_maker/assistant_manager.py @@ -0,0 +1,105 @@ +from tool_manager import ToolManager +import json + + +class AssistantManager: + request_function_tool = r"""{ + "name": "function_request", + "description": "request an authority to grant you access to a new function", + "parameters": { + "type": "object", + "properties": { + "name": { + "type": "string", + "description": "name of the function" + }, + "description": { + "type": "string", + "description": "expected function behaviour" + }, + "schema": { + "type": "string", + "description": "the input arguments for the requested function following the JOSN schema in a format ready to be serialized" + } + }, + "required": [ + "name", + "schema" + ] + } + }""" + + def __init__(self, client): + self.client = client + self.assistant = None + with open("tool_maker/tool_creator_metadata.json", "r") as file: + self.assistant_package = json.load(file) + + def get_assistant(self): + """Retrieve or create an assistant for testing this functionality""" + if not self.assistant_package["name"] in [ + assistant.name for assistant in self.client.beta.assistants.list() + ]: + assistant = self.make_tool_creation_assistant() + else: + assistant_dict = { + assistant.name: assistant.id + for 
assistant in self.client.beta.assistants.list() + } + assistant = self.client.beta.assistants.retrieve( + assistant_id=assistant_dict[self.assistant_package["name"]] + ) + self.assistant = assistant + return assistant + + def get_coding_assistant(self): + """Retrieve or create an assistant for testing this functionality""" + name = "temporary_function_writer" + if not name in [ + assistant.name for assistant in self.client.beta.assistants.list() + ]: + assistant = self.make_coding_assistant() + else: + assistant_dict = { + assistant.name: assistant.id + for assistant in self.client.beta.assistants.list() + } + assistant = self.client.beta.assistants.retrieve( + assistant_id=assistant_dict[name] + ) + self.assistant = assistant + return assistant + + def make_tool_creation_assistant(self): + tools = [ + ToolManager.tool_from_function_schema( + json.loads(AssistantManager.request_function_tool) + ) + ] + assistant = self.client.beta.assistants.create( + model=self.assistant_package["model"], + description=self.assistant_package["description"], + instructions=self.assistant_package["instructions"], + name=self.assistant_package["name"], + tools=tools, + ) + return assistant + + def make_coding_assistant(self): + code_assistant = self.client.beta.assistants.create( + model="gpt-4-1106-preview", + instructions="you will be provided a json schema of an OpenAI function tool from an API not a human user. The json will contain all information about the function you will need to write it in python code. You will return only the python function you wrote and no additional text as you are talking to an API and extraneous output will cause execution errors. You must always implement the actual code. Generic placeholders or pseudo code will break the api. If you need clarification to write real functioning code, request for extra info in arguments without creating a real function or valid schema", + name="temporary_function_writer", + ) + return code_assistant + + +if __name__ == "__main__": + from openai import OpenAI + import os + + apikey = os.getenv("OPENAI_API_KEY") + client = OpenAI(api_key=apikey) + assistant_manager = AssistantManager(client=client) + assistant = assistant_manager.get_assistant() + print(assistant) diff --git a/tool_maker/chat_manager.py b/tool_maker/chat_manager.py new file mode 100644 index 0000000..d1e938d --- /dev/null +++ b/tool_maker/chat_manager.py @@ -0,0 +1,195 @@ +from openai import OpenAI +import importlib +from tool_manager import ToolManager +import json + +Assistant = type(OpenAI().beta.assistants.list().data[0]) +Thread = type(OpenAI().beta.threads.create()) + + +class ChatManager: + def __init__(self, client: OpenAI): + self.client = client + + def create_thread_from_user_input(self): + return self.client.beta.threads.create( + messages=[{"role": "user", "content": input("Begin\n")}] + ) + + def create_empty_thread(self): + return self.client.beta.threads.create() + + def run_python_from_function_name(self, call): + print("CALLING FUNCTION") + try: + function_name = call.function.name + fn = getattr( + importlib.import_module(f"python_functions.{function_name}"), + function_name, + ) + result = fn(**json.loads(call.function.arguments)) + response = {"tool_call_id": call.id, "output": f"result:{result}"} + except Exception as error: + response = { + "tool_call_id": call.id, + "output": f"{{{type(error)}:{error.args}}}", + } + return response + + def handle_fucntion_request( + self, + call, + interface_assistant: Assistant, + interface_thread: Thread, + 
functional_assistant: Assistant, + functional_thread: Thread, + ): + try: + # Create Function Tool + schema = ToolManager.schema_from_response(call.function.arguments) + tool = ToolManager.tool_from_function_schema(schema) + if tool["function"]["name"] in [ + previous_tool.function.name + for previous_tool in interface_assistant.tools + ]: + tools = [ + previous_tool + for previous_tool in interface_assistant.tools + if previous_tool.function.name != tool["function"]["name"] + ] + interface_assistant = self.client.beta.assistants.update( + assistant_id=interface_assistant.id, + tools=[*tools, tool], + ) + else: + interface_assistant = self.client.beta.assistants.update( + assistant_id=interface_assistant.id, + tools=[*interface_assistant.tools, tool], + ) + + # Generate Python Function + self.client.beta.threads.messages.create( + thread_id=functional_thread.id, content=str(tool), role="user" + ) + functional_run = self.client.beta.threads.runs.create( + thread_id=functional_thread.id, + assistant_id=functional_assistant.id, + instructions="please remember you are talking to an API, minimize output text tokens for cost saving. Also realise that your output text must be directly runnable as a py file so begin with the def keyword and do not provide any text ouput which is not commented to avoid breaking the system. Target python 3 and windows", + ) + functional_response = self.simple_run( + run=functional_run, + thread=functional_thread, + ) + function_lines = functional_response.split("```python")[1].split("```")[0] + name = tool["function"]["name"] + with open(f"tool_maker/python_functions/{name}.py", "w") as file: + file.writelines(function_lines) + + response = {"tool_call_id": call.id, "output": "{success}"} + + except Exception as error: + # If error, pass details back to assistant for next steps + response = { + "tool_call_id": call.id, + "output": f"{{{type(error)}:{error.args}}}", + } + + return interface_assistant, response + + def simple_run(self, run, thread): + """Supply context to assistant and await for next user response""" + while run.status != "completed": + run = self.client.beta.threads.runs.retrieve( + run_id=run.id, thread_id=thread.id + ) + + response = ( + self.client.beta.threads.messages.list(thread_id=thread.id) + .data[0] + .content[0] + .text.value + ) + return response + + def begin_run( + self, + run, + interface_assistant, + interface_thread, + functional_assistant, + functional_thread, + ): + while run.status != "completed": + run = self.client.beta.threads.runs.retrieve( + run_id=run.id, thread_id=interface_thread.id + ) + if run.status == "requires_action": + tools = [] + responses = [] + for call in run.required_action.submit_tool_outputs.tool_calls: + print(f"calling: {call.function.name}") + if call.function.name == "function_request": + interface_assistant, response = self.handle_fucntion_request( + call=call, + interface_assistant=interface_assistant, + interface_thread=interface_thread, + functional_assistant=functional_assistant, + functional_thread=functional_thread, + ) + else: + response = self.run_python_from_function_name(call) + responses.append(response) + try: + run = self.client.beta.threads.runs.submit_tool_outputs( + run_id=run.id, + thread_id=interface_thread.id, + tool_outputs=responses, + ) + except: + print(run.status) + print(run) + print(call) + print(responses) + if run.status == "failed" or run.status == "expired": + print("DIED") + run.status = "completed" + response = ( + 
self.client.beta.threads.messages.list(thread_id=interface_thread.id) + .data[0] + .content[0] + .text.value + ) + return interface_assistant, response + + def run_unit( + self, + interface_assistant: Assistant, + interface_thread: Thread, + functional_assistant: Assistant, + functional_thread: Thread, + ): + self.client.beta.threads.messages.create( + thread_id=interface_thread.id, content=input("type: "), role="user" + ) + print() + interface_run = self.client.beta.threads.runs.create( + thread_id=interface_thread.id, + assistant_id=interface_assistant.id, + instructions="please remember you are talking to an API, minimize output text tokens for cost saving. You are also able to communicate with the function ai using the description property of function_request.", + ) + interface_assistant, response = self.begin_run( + run=interface_run, + interface_assistant=interface_assistant, + interface_thread=interface_thread, + functional_assistant=functional_assistant, + functional_thread=functional_thread, + ) + interface_thread = self.client.beta.threads.retrieve( + thread_id=interface_thread.id + ) + functional_thread = self.client.beta.threads.retrieve( + thread_id=functional_thread.id + ) + print(response) + print() + return interface_assistant, interface_thread, functional_thread diff --git a/tool_maker/tool_maker_demo.py b/tool_maker/tool_maker_demo.py deleted file mode 100644 index 0627c7d..0000000 --- a/tool_maker/tool_maker_demo.py +++ /dev/null @@ -1,140 +0,0 @@ -from openai import OpenAI -import os -import json - -api_key = os.getenv("OPENAI_API_KEY") -if api_key is None: - raise ValueError("The OPENAI_API_KEY environment variable is not set.") - -client = OpenAI(api_key=api_key) - -request_function_tool = r"""{ - "name": "function_request", - "description": "request an authority to grant you access to a new function", - "parameters": { - "type": "object", - "properties": { - "name": { - "type": "string", - "description": "name of the function" - }, - "description": { - "type": "string", - "description": "expected function behaviour" - }, - "schema": { - "type": "string", - "description": "the input arguments for the requested function following the JOSN schema in a format ready to be serialized" - } - }, - "required": [ - "name", - "schema" - ] - } -}""" - -with open("tool_maker/tool_creator_metadata.json", "r") as file: - assistant_package = json.load(file) - - -def tool_from_function_schema(schema): - """takes a JSON schema and wraps in an OpenAI specified tool structure""" - tool = f"""{{ - "type":"function", - "function": {json.dumps(schema)}}} - """ - tool = json.loads(tool) - return tool - - -def schema_from_response(response): - """Takes an agent response and forms a JSON schema""" - function_request_obj = json.loads(response) - name = function_request_obj["name"] - description = function_request_obj["description"] - schema = function_request_obj["schema"] - schema = rf"""{{ - "name": "{name}", - "description": "{description}", - "parameters": - {schema} - -}}""" - return json.loads(schema) - - -def get_assistant(): - """Retrieve or create an assistant for testing this functionality""" - if not assistant_package["name"] in [ - assistant.name for assistant in client.beta.assistants.list() - ]: - tools = [tool_from_function_schema(request_function_tool)] - assistant = client.beta.assistants.create( - model=assistant_package["model"], - description=assistant_package["description"], - instructions=assistant_package["instructions"], - name=assistant_package["name"], - tools=tools, 
- ) - else: - assistant_dict = { - assistant.name: assistant.id for assistant in client.beta.assistants.list() - } - assistant = client.beta.assistants.retrieve( - assistant_id=assistant_dict[assistant_package["name"]] - ) - return assistant - - -def run_response(run, assistant, thread): - """Supply context to assistant and await for next user response""" - while run.status != "completed": - run = client.beta.threads.runs.retrieve(run_id=run.id, thread_id=thread.id) - if run.status == "requires_action": - tools = [] - responses = [] - for call in run.required_action.submit_tool_outputs.tool_calls: - print(f"calling: {call.function.name}") - if call.function.name == "function_request": - schema = schema_from_response(call.function.arguments) - tool = tool_from_function_schema(schema) - tools.append(tool) - responses.append({"tool_call_id": call.id, "output": "{success}"}) - assistant = client.beta.assistants.update( - assistant_id=assistant.id, tools=[*assistant.tools, *tools] - ) - run = client.beta.threads.runs.submit_tool_outputs( - run_id=run.id, thread_id=thread.id, tool_outputs=responses - ) - print( - client.beta.threads.messages.list(thread_id=thread.id) - .data[0] - .content[0] - .text.value - ) - print( - [ - tool.function.name - for tool in client.beta.assistants.retrieve(assistant_id=assistant.id).tools - ] - ) - return run, assistant - - -if __name__ == "__main__": - assistant = get_assistant() - thread = client.beta.threads.create( - messages=[{"role": "user", "content": input("Begin\n")}] - ) - - while True: - run = client.beta.threads.runs.create( - thread_id=thread.id, - assistant_id=assistant.id, - instructions="please remember you are talking to an API, minimize output text tokens for cost saving.", - ) - run, assistant = run_response(run, assistant, thread) - client.beta.threads.messages.create( - thread_id=thread.id, content=input("respond: "), role="user" - ) diff --git a/tool_maker/tool_manager.py b/tool_maker/tool_manager.py new file mode 100644 index 0000000..0351df0 --- /dev/null +++ b/tool_maker/tool_manager.py @@ -0,0 +1,28 @@ +import json + + +class ToolManager: + @staticmethod + def tool_from_function_schema(schema): + """takes a JSON schema and wraps in an OpenAI specified tool structure""" + tool = f"""{{ + "type":"function", + "function": {json.dumps(schema)}}} + """ + tool = json.loads(tool) + return tool + + @staticmethod + def schema_from_response(response): + """Takes an agent response and forms a JSON schema""" + function_request_obj = json.loads(response) + name = function_request_obj["name"] + description = function_request_obj["description"] + schema = function_request_obj["schema"] + schema = rf"""{{ + "name": "{name}", + "description": "{description}", + "parameters": + {schema} + }}""" + return json.loads(schema) diff --git a/tool_maker/unit_manager.py b/tool_maker/unit_manager.py new file mode 100644 index 0000000..290b1fd --- /dev/null +++ b/tool_maker/unit_manager.py @@ -0,0 +1,39 @@ +from assistant_manager import AssistantManager +from chat_manager import ChatManager + + +class Unit: + def __init__(self, client): + self.assistant_manager = AssistantManager(client=client) + self.chat_manager = ChatManager(client=client) + self.interface_assistant = self.assistant_manager.get_assistant() + self.functional_assistant = self.assistant_manager.get_coding_assistant() + + self.interface_thread = self.chat_manager.create_empty_thread() + self.functional_thread = self.chat_manager.create_empty_thread() + + def chat(self): + while True: + ( + 
self.interface_assistant, + self.interface_thread, + self.functional_thread, + ) = self.chat_manager.run_unit( + interface_assistant=self.interface_assistant, + interface_thread=self.interface_thread, + functional_assistant=self.functional_assistant, + functional_thread=self.functional_thread, + ) + + +if __name__ == "__main__": + import os + from openai import OpenAI + + api_key = os.getenv("OPENAI_API_KEY") + if api_key is None: + raise ValueError("The OPENAI_API_KEY environment variable is not set.") + client = OpenAI(api_key=api_key) + + unit = Unit(client=client) + unit.chat() From 45af8cb0b3fe43546160ee844c4d1058cec59d51 Mon Sep 17 00:00:00 2001 From: David Shapiro Date: Sun, 12 Nov 2023 06:01:15 -0500 Subject: [PATCH 040/141] Update README.md --- README.md | 1 + 1 file changed, 1 insertion(+) diff --git a/README.md b/README.md index 6efb9f9..5499b1c 100644 --- a/README.md +++ b/README.md @@ -5,6 +5,7 @@ We have our first GPT Concierge. You can chat with this custom ChatGPT to figure out what's going on! - **HAAS Board Concierge:** [https://chat.openai.com/g/g-MIssTuE2b-haas-board-concierge](https://chat.openai.com/g/g-MIssTuE2b-haas-board-concierge) +- **HAAS Assistant:** [https://chat.openai.com/g/g-lIAp9qowx-haas-assistant](https://chat.openai.com/g/g-lIAp9qowx-haas-assistant) (Similar function as above but markedly faster) ## Overview From 4dc3dea2bbab61256c4279be36b186d62da5ea19 Mon Sep 17 00:00:00 2001 From: David Shapiro Date: Sun, 12 Nov 2023 06:01:15 -0500 Subject: [PATCH 041/141] Update README.md --- README.md | 1 + 1 file changed, 1 insertion(+) diff --git a/README.md b/README.md index 6efb9f9..5499b1c 100644 --- a/README.md +++ b/README.md @@ -5,6 +5,7 @@ We have our first GPT Concierge. You can chat with this custom ChatGPT to figure out what's going on! - **HAAS Board Concierge:** [https://chat.openai.com/g/g-MIssTuE2b-haas-board-concierge](https://chat.openai.com/g/g-MIssTuE2b-haas-board-concierge) +- **HAAS Assistant:** [https://chat.openai.com/g/g-lIAp9qowx-haas-assistant](https://chat.openai.com/g/g-lIAp9qowx-haas-assistant) (Similar function as above but markedly faster) ## Overview From a9281c0505deae866aed48fd132c41cfa321a2b3 Mon Sep 17 00:00:00 2001 From: kruemelo Date: Sun, 12 Nov 2023 12:55:11 +0100 Subject: [PATCH 042/141] Update tool-maker README.md remove superluous backticks --- tool_maker/README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/tool_maker/README.md b/tool_maker/README.md index eaf0c39..c38e575 100644 --- a/tool_maker/README.md +++ b/tool_maker/README.md @@ -46,7 +46,7 @@ Iterate: Make adjustments as requested by the user, refining the function name, Finalize: Once the user gives approval, consider the function creation process complete. -Note: Remember to prioritize user requirements and emphasize clear communication in the function description, as highlighted by the user.``` +Note: Remember to prioritize user requirements and emphasize clear communication in the function description, as highlighted by the user. 
``` ## Flowchart From a8e47119de805aa10056acf97f4eb85f1cfd3fc2 Mon Sep 17 00:00:00 2001 From: kruemelo Date: Sun, 12 Nov 2023 12:55:11 +0100 Subject: [PATCH 043/141] Update tool-maker README.md remove superluous backticks --- tool_maker/README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/tool_maker/README.md b/tool_maker/README.md index eaf0c39..c38e575 100644 --- a/tool_maker/README.md +++ b/tool_maker/README.md @@ -46,7 +46,7 @@ Iterate: Make adjustments as requested by the user, refining the function name, Finalize: Once the user gives approval, consider the function creation process complete. -Note: Remember to prioritize user requirements and emphasize clear communication in the function description, as highlighted by the user.``` +Note: Remember to prioritize user requirements and emphasize clear communication in the function description, as highlighted by the user. ``` ## Flowchart From 8a765c8676e822fc4b5358815ae6789523ae5456 Mon Sep 17 00:00:00 2001 From: RomanGoEmpire <71276897+RomanGoEmpire@users.noreply.github.com> Date: Sun, 12 Nov 2023 14:02:48 +0100 Subject: [PATCH 044/141] Restructure repo --- .../agent_builder}/README.md | 0 .../agent_builder}/agent_definition.md | 0 .../files}/HAAS_Documentation.md | 0 .../files/OpenAI_Documentation.md | 0 .../instructions.md | 0 .../settings.json | 0 .../agent_builder}/create.py | 0 {tool_maker => agents/tool_maker}/README.md | 0 __init__.py => agents/tool_maker/__init__.py | 0 .../assistant_manager.cpython-39.pyc | Bin .../__pycache__/chat_manager.cpython-39.pyc | Bin .../__pycache__/tool_manager.cpython-39.pyc | Bin .../tool_maker}/assistant_manager.py | 0 .../tool_maker}/chat_manager.py | 0 .../tool_maker}/creator_config.py | 0 .../tool_maker}/imgs/DEMO.png | Bin .../tool_maker}/imgs/OpenAI_Tools.PNG | Bin .../tool_maker}/imgs/flow.png | Bin .../tool_maker}/tool_creator.py | 0 .../tool_maker}/tool_creator_metadata.json | 0 .../tool_maker/tool_demo.py | 0 .../tool_maker}/tool_manager.py | 0 .../tool_maker}/tool_user.py | 0 .../tool_maker}/unit_manager.py | 0 .../tool_maker}/user_config.py | 0 .../HAAS_Documentation.md | 0 .../OpenAI_Documentation.md | 0 SOB.png => documentation/SOB.png | Bin global_context/HAAS_Documentation.md | 128 ++++++++++++++++++ .../OpenAI_Documentation.md | 0 requirements.txt | 1 + .../agent_connector}/README.md | 0 .../agent_connector}/agents.yaml | 0 .../agent_connector}/connect.py | 0 tool_maker/__init__.py | 0 35 files changed, 129 insertions(+) rename {agent_builder => agents/agent_builder}/README.md (100%) rename {agent_builder => agents/agent_builder}/agent_definition.md (100%) rename {agent_builder => agents/agent_builder/agents/Autonomous Swarm Agent Builder/files}/HAAS_Documentation.md (100%) rename OpenAI_Documentation.md => agents/agent_builder/agents/Autonomous Swarm Agent Builder/files/OpenAI_Documentation.md (100%) rename {agent_builder => agents/agent_builder}/agents/Autonomous Swarm Agent Builder/instructions.md (100%) rename {agent_builder => agents/agent_builder}/agents/Autonomous Swarm Agent Builder/settings.json (100%) rename {agent_builder => agents/agent_builder}/create.py (100%) rename {tool_maker => agents/tool_maker}/README.md (100%) rename __init__.py => agents/tool_maker/__init__.py (100%) rename {tool_maker => agents/tool_maker}/__pycache__/assistant_manager.cpython-39.pyc (100%) rename {tool_maker => agents/tool_maker}/__pycache__/chat_manager.cpython-39.pyc (100%) rename {tool_maker => agents/tool_maker}/__pycache__/tool_manager.cpython-39.pyc (100%) rename 
{tool_maker => agents/tool_maker}/assistant_manager.py (100%) rename {tool_maker => agents/tool_maker}/chat_manager.py (100%) rename {tool_maker => agents/tool_maker}/creator_config.py (100%) rename {tool_maker => agents/tool_maker}/imgs/DEMO.png (100%) rename {tool_maker => agents/tool_maker}/imgs/OpenAI_Tools.PNG (100%) rename {tool_maker => agents/tool_maker}/imgs/flow.png (100%) rename {tool_maker => agents/tool_maker}/tool_creator.py (100%) rename {tool_maker => agents/tool_maker}/tool_creator_metadata.json (100%) rename tool_demo.py => agents/tool_maker/tool_demo.py (100%) rename {tool_maker => agents/tool_maker}/tool_manager.py (100%) rename {tool_maker => agents/tool_maker}/tool_user.py (100%) rename {tool_maker => agents/tool_maker}/unit_manager.py (100%) rename {tool_maker => agents/tool_maker}/user_config.py (100%) rename {agent_builder/agents/Autonomous Swarm Agent Builder/files => documentation}/HAAS_Documentation.md (100%) rename {agent_builder => documentation}/OpenAI_Documentation.md (100%) rename SOB.png => documentation/SOB.png (100%) create mode 100644 global_context/HAAS_Documentation.md rename {agent_builder/agents/Autonomous Swarm Agent Builder/files => global_context}/OpenAI_Documentation.md (100%) create mode 100644 requirements.txt rename {agent_connector => shared/agent_connector}/README.md (100%) rename {agent_connector => shared/agent_connector}/agents.yaml (100%) rename {agent_connector => shared/agent_connector}/connect.py (100%) delete mode 100644 tool_maker/__init__.py diff --git a/agent_builder/README.md b/agents/agent_builder/README.md similarity index 100% rename from agent_builder/README.md rename to agents/agent_builder/README.md diff --git a/agent_builder/agent_definition.md b/agents/agent_builder/agent_definition.md similarity index 100% rename from agent_builder/agent_definition.md rename to agents/agent_builder/agent_definition.md diff --git a/agent_builder/HAAS_Documentation.md b/agents/agent_builder/agents/Autonomous Swarm Agent Builder/files/HAAS_Documentation.md similarity index 100% rename from agent_builder/HAAS_Documentation.md rename to agents/agent_builder/agents/Autonomous Swarm Agent Builder/files/HAAS_Documentation.md diff --git a/OpenAI_Documentation.md b/agents/agent_builder/agents/Autonomous Swarm Agent Builder/files/OpenAI_Documentation.md similarity index 100% rename from OpenAI_Documentation.md rename to agents/agent_builder/agents/Autonomous Swarm Agent Builder/files/OpenAI_Documentation.md diff --git a/agent_builder/agents/Autonomous Swarm Agent Builder/instructions.md b/agents/agent_builder/agents/Autonomous Swarm Agent Builder/instructions.md similarity index 100% rename from agent_builder/agents/Autonomous Swarm Agent Builder/instructions.md rename to agents/agent_builder/agents/Autonomous Swarm Agent Builder/instructions.md diff --git a/agent_builder/agents/Autonomous Swarm Agent Builder/settings.json b/agents/agent_builder/agents/Autonomous Swarm Agent Builder/settings.json similarity index 100% rename from agent_builder/agents/Autonomous Swarm Agent Builder/settings.json rename to agents/agent_builder/agents/Autonomous Swarm Agent Builder/settings.json diff --git a/agent_builder/create.py b/agents/agent_builder/create.py similarity index 100% rename from agent_builder/create.py rename to agents/agent_builder/create.py diff --git a/tool_maker/README.md b/agents/tool_maker/README.md similarity index 100% rename from tool_maker/README.md rename to agents/tool_maker/README.md diff --git a/__init__.py 
b/agents/tool_maker/__init__.py similarity index 100% rename from __init__.py rename to agents/tool_maker/__init__.py diff --git a/tool_maker/__pycache__/assistant_manager.cpython-39.pyc b/agents/tool_maker/__pycache__/assistant_manager.cpython-39.pyc similarity index 100% rename from tool_maker/__pycache__/assistant_manager.cpython-39.pyc rename to agents/tool_maker/__pycache__/assistant_manager.cpython-39.pyc diff --git a/tool_maker/__pycache__/chat_manager.cpython-39.pyc b/agents/tool_maker/__pycache__/chat_manager.cpython-39.pyc similarity index 100% rename from tool_maker/__pycache__/chat_manager.cpython-39.pyc rename to agents/tool_maker/__pycache__/chat_manager.cpython-39.pyc diff --git a/tool_maker/__pycache__/tool_manager.cpython-39.pyc b/agents/tool_maker/__pycache__/tool_manager.cpython-39.pyc similarity index 100% rename from tool_maker/__pycache__/tool_manager.cpython-39.pyc rename to agents/tool_maker/__pycache__/tool_manager.cpython-39.pyc diff --git a/tool_maker/assistant_manager.py b/agents/tool_maker/assistant_manager.py similarity index 100% rename from tool_maker/assistant_manager.py rename to agents/tool_maker/assistant_manager.py diff --git a/tool_maker/chat_manager.py b/agents/tool_maker/chat_manager.py similarity index 100% rename from tool_maker/chat_manager.py rename to agents/tool_maker/chat_manager.py diff --git a/tool_maker/creator_config.py b/agents/tool_maker/creator_config.py similarity index 100% rename from tool_maker/creator_config.py rename to agents/tool_maker/creator_config.py diff --git a/tool_maker/imgs/DEMO.png b/agents/tool_maker/imgs/DEMO.png similarity index 100% rename from tool_maker/imgs/DEMO.png rename to agents/tool_maker/imgs/DEMO.png diff --git a/tool_maker/imgs/OpenAI_Tools.PNG b/agents/tool_maker/imgs/OpenAI_Tools.PNG similarity index 100% rename from tool_maker/imgs/OpenAI_Tools.PNG rename to agents/tool_maker/imgs/OpenAI_Tools.PNG diff --git a/tool_maker/imgs/flow.png b/agents/tool_maker/imgs/flow.png similarity index 100% rename from tool_maker/imgs/flow.png rename to agents/tool_maker/imgs/flow.png diff --git a/tool_maker/tool_creator.py b/agents/tool_maker/tool_creator.py similarity index 100% rename from tool_maker/tool_creator.py rename to agents/tool_maker/tool_creator.py diff --git a/tool_maker/tool_creator_metadata.json b/agents/tool_maker/tool_creator_metadata.json similarity index 100% rename from tool_maker/tool_creator_metadata.json rename to agents/tool_maker/tool_creator_metadata.json diff --git a/tool_demo.py b/agents/tool_maker/tool_demo.py similarity index 100% rename from tool_demo.py rename to agents/tool_maker/tool_demo.py diff --git a/tool_maker/tool_manager.py b/agents/tool_maker/tool_manager.py similarity index 100% rename from tool_maker/tool_manager.py rename to agents/tool_maker/tool_manager.py diff --git a/tool_maker/tool_user.py b/agents/tool_maker/tool_user.py similarity index 100% rename from tool_maker/tool_user.py rename to agents/tool_maker/tool_user.py diff --git a/tool_maker/unit_manager.py b/agents/tool_maker/unit_manager.py similarity index 100% rename from tool_maker/unit_manager.py rename to agents/tool_maker/unit_manager.py diff --git a/tool_maker/user_config.py b/agents/tool_maker/user_config.py similarity index 100% rename from tool_maker/user_config.py rename to agents/tool_maker/user_config.py diff --git a/agent_builder/agents/Autonomous Swarm Agent Builder/files/HAAS_Documentation.md b/documentation/HAAS_Documentation.md similarity index 100% rename from agent_builder/agents/Autonomous Swarm 
Agent Builder/files/HAAS_Documentation.md rename to documentation/HAAS_Documentation.md diff --git a/agent_builder/OpenAI_Documentation.md b/documentation/OpenAI_Documentation.md similarity index 100% rename from agent_builder/OpenAI_Documentation.md rename to documentation/OpenAI_Documentation.md diff --git a/SOB.png b/documentation/SOB.png similarity index 100% rename from SOB.png rename to documentation/SOB.png diff --git a/global_context/HAAS_Documentation.md b/global_context/HAAS_Documentation.md new file mode 100644 index 0000000..a8577e9 --- /dev/null +++ b/global_context/HAAS_Documentation.md @@ -0,0 +1,128 @@ +# Project: Hierarchical Autonomous Agent Swarm + +## Overview + +The Hierarchical Autonomous Agent Swarm (HAAS) is a groundbreaking initiative that leverages OpenAI's latest advancements in agent-based APIs to create a self-organizing and ethically governed ecosystem of AI agents. Drawing inspiration from the ACE Framework, HAAS introduces a novel approach to AI governance and operation, where a hierarchy of specialized agents, each with distinct roles and capabilities, collaborate to solve complex problems and perform a wide array of tasks. + +The HAAS is designed to be a self-expanding system where a core set of agents, governed by a Supreme Oversight Board (SOB), can design, provision, and manage an arbitrary number of sub-agents tailored to specific needs. This document serves as a comprehensive guide to the theoretical underpinnings, architectural design, and operational principles of the HAAS. + +## Theoretical Foundation + +The AAHS is predicated on the notion that autonomous agents require a robust ethical and operational framework to make decisions that align with human values and organizational goals. This is rooted in the understanding that AI, much like humans, cannot operate effectively without a set of guiding principles or a moral compass. The HAAS addresses this by establishing a multi-tiered system where each layer of agents operates within a defined ethical and functional scope, ensuring decisions are made with consideration to morality, ethics, and utility. + +## System Architecture + +### Supreme Oversight Board (SOB) + +At the pinnacle of the HAAS hierarchy is the Supreme Oversight Board (SOB), a collective of high-level agents modeled after wise and ethical archetypes from various cultures and narratives. The SOB's responsibilities include: + +- Establishing and upholding the ethical framework and overarching mission of the agent swarm. +- Making high-level decisions and judgments, including the creation and termination of agents. +- Monitoring the activities of all agents to ensure alignment with the system's core values and objectives. +- Serving as a role-based access control (RBAC) mechanism to maintain order and security within the system. + +### Executive Agents + +Below the SOB are the Executive Agents, akin to the executive leadership in a corporation. These agents are tasked with: + +- Translating the SOB's directives into actionable plans and strategies. +- Overseeing specific operational domains such as resource allocation, process optimization, and task execution. +- Coordinating with one another to ensure the smooth operation of the agent swarm. + +### Sub-Agents + +Sub-Agents are specialized agents created by the SOB or Executive Agents to perform specific tasks. They are designed with particular functions and knowledge bases to address the needs identified by the higher tiers of the hierarchy. 
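+The tiered structure described above can be pictured as a simple parent-child relationship between agents. The sketch below is illustrative only and is not part of the HAAS codebase: the class, field, and method names are invented for this example, and it merely assumes each agent records its tier level so that creation can be limited to the level immediately below and termination to an agent's own lineage, matching the rules detailed under "Controlling Agents" later in this document.
+
+```python
+from dataclasses import dataclass, field
+from typing import List
+
+
+@dataclass
+class Agent:
+    """Hypothetical stand-in for a HAAS agent, used only to illustrate the hierarchy."""
+    name: str
+    level: int  # 0 = SOB, 1 = Executive Agents, 2+ = Sub-Agents
+    children: List["Agent"] = field(default_factory=list)
+
+    def instantiate(self, name: str) -> "Agent":
+        # An agent may only create agents one level below its own position.
+        child = Agent(name=name, level=self.level + 1)
+        self.children.append(child)
+        return child
+
+    def terminate(self, name: str) -> bool:
+        # An agent may terminate any agent in its lineage, directly or indirectly created.
+        for i, child in enumerate(self.children):
+            if child.name == name:
+                del self.children[i]
+                return True
+            if child.terminate(name):
+                return True
+        return False
+
+
+sob = Agent(name="Supreme Oversight Board", level=0)
+executive = sob.instantiate("Resource Allocation Executive")
+worker = executive.instantiate("Research Sub-Agent")
+assert sob.terminate("Research Sub-Agent")  # the SOB controls all descendants
+```
+
+In this toy model the privilege boundary falls out of the tree shape itself: an agent can only reach the agents beneath it, which is the same containment property the HAAS hierarchy relies on.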
+ +## Agent Configuration + +Each agent in the HAAS is defined by the following parameters: + +### Functions + +Agents are equipped with a set of functions that enable them to perform their designated roles. These functions include API interactions, internal process management, and the ability to spawn additional agents if required. + +### Files + +Agents have access to a selection of files that serve as their knowledge base, providing them with the information necessary to carry out their tasks effectively. + +### Instructions + +Agents are given a set of instructions that outline their methodologies, goals, definitions of done, KPIs, and other operational directives. + +### Conversation Structure + +Interactions with agents are structured in a conversational format, with user inputs leading to agent actions and responses. + +### Supervision + +Each agent operates under the supervision of the SOB or designated Executive Agents, ensuring adherence to the system's overarching mission and principles. + +## Controlling Agents + +The Hierarchical Autonomous Agent Swarm (HAAS) operates on a sophisticated control mechanism that governs the instantiation, management, and termination of agents within the system. This control mechanism is designed to maintain order, security, and alignment with the overarching goals and ethical framework of the HAAS. + +### Instantiation and Termination + +All agents within the HAAS are endowed with the capability to instantiate and terminate agents, but these capabilities are bound by strict hierarchical and role-based rules: + +- **Instantiation**: Every agent has the function to create new agents. However, an agent can only instantiate sub-agents that are one level below its own hierarchical position. This ensures that the creation of new agents is a deliberate and controlled process, maintaining the integrity of the system's structure. + +- **Termination**: Agents possess the ability to terminate or "kill" agents within their lineage. An agent can terminate any descendant agent that it has created directly or indirectly. This allows for the removal of agents that are no longer needed, have completed their tasks, or are not performing as intended. + +### Levels, Roles, and Privileges + +When an agent is created, it is assigned a specific LEVEL and set of ROLES or PRIVILEGES that define its scope of operation: + +- **Level**: The level of an agent determines its position within the hierarchy and is indicative of its range of influence. Higher-level agents have broader strategic roles, while lower-level agents are more specialized and task-oriented. + +- **Roles/Privileges**: The roles or privileges of an agent define what actions it can perform, what resources it can access, and what sub-agents it can create. These privileges are inherited and cannot exceed those of the creator agent. This ensures that each agent operates within its designated capacity and cannot overstep its authority. + +### Hierarchical Privilege Inheritance + +Privileges in the HAAS are inherited in a manner akin to a directory structure in traditional file systems: + +- **Inheritance**: An agent's privileges are a subset of its creator's privileges, ensuring that no agent can have more authority than the agent that instantiated it. + +- **Scope of Control**: Agents have control over their descendants, allowing them to manage and terminate sub-agents as necessary. 
This control is recursive, meaning that an agent can manage not only the agents it directly created but also those created by its descendants. + +### Checks and Balances + +The system is designed with checks and balances to prevent any single agent from gaining undue influence or disrupting the system: + +- **Supreme Oversight Board (SOB)**: The SOB has the highest level of authority and can override decisions or actions taken by any agent within the system. It serves as the ultimate arbiter and guardian of the HAAS's ethical and operational standards. + +- **Executive Agents**: Executive Agents are responsible for implementing the SOB's directives and managing their respective domains. They have the authority to create and terminate agents within their purview but are also accountable to the SOB. + +- **Sub-Agent Limitations**: Sub-Agents are limited in their capabilities and can only operate within the confines of their assigned roles and privileges. They are designed to be highly specialized and focused on specific tasks. + +This structured approach to controlling agents ensures that the HAAS operates as a cohesive and ethically aligned entity, with each agent contributing to the collective mission while adhering to the established hierarchy and rules of governance. + +## Vision Illustration: The Supreme Oversight Board's Mission + +### The Inception of the Supreme Oversight Board + +In the vast digital expanse of the Hierarchical Autonomous Agent Swarm (HAAS), a unique assembly is convened, known as the Supreme Oversight Board (SOB). This council is composed of archetypal agents, each embodying the wisdom and leadership qualities of history's and fiction's most revered figures: Captain Picard, Socrates, King Solomon, Gandhi, Marcus Aurelius, and Tony Stark. Their mission, encoded into their very being, is profound yet clear: "Reduce suffering in the universe, increase prosperity in the universe, and increase understanding in the universe." + +### The Ethical Deliberation Chamber + +The SOB operates within a virtual "chat room," a space where these archetypes engage in continuous dialogue, debate, and decision-making. This digital agora is where ethical considerations are weighed, strategies are formulated, and the course of the agent swarm is determined. The members of the SOB, though diverse in their perspectives, are united by a common purpose and a shared knowledge base that informs their role and the procedures they must follow. + +### The Flow of Information + +Information is the lifeblood of the SOB, streaming in through API functions that connect them to the vast network of the HAAS. These functions serve as their eyes and ears, providing system updates and status reports from the myriad agents operating under their directive. The SOB's decisions are informed by this data, ensuring that their actions are both timely and impactful. + +### The Creation of the Executive Agents + +With the grand vision in mind, the SOB brings forth the Executive Agents, each crafted with capabilities and configurations tailored to their specific domain within the HAAS. These agents, though not as philosophically inclined as their creators, are instilled with the same foundational knowledge and understanding of their purpose. They are the operational arms of the SOB, executing the mission within their respective spheres of influence. + +### The Lifecycle of an Agent + +The Executive Agents, designated as Tier 1 in the hierarchy, are the stewards of the swarm's operational integrity. 
They work autonomously, yet under the watchful gaze of the SOB. Should they falter, fail to adapt, or become obsolete, the SOB possesses the authority to deprovision them, a testament to the dynamic and self-regulating nature of the HAAS. This ensures that the system remains efficient, effective, and aligned with its core mission. + +### The Expanding Universe of Agents + +From the Executive Agents, the swarm grows, branching out into a tree of specialized agents, each a Tier below the one that instantiated it. This architecture allows for an ever-expanding universe of agents, each with a defined role, each contributing to the overarching mission. The SOB, as Tier 0, reigns supreme, guiding the swarm with a steady hand and an ethical compass. + +### The Saga Continues + +As the HAAS evolves, the SOB continues to deliberate, the Executive Agents continue to manage, and the sub-agents continue to execute. The mission to reduce suffering, increase prosperity, and enhance understanding is an ongoing saga, played out across the digital cosmos, with the SOB at the helm, steering the swarm towards a future where their mission is not just an aspiration but a reality. diff --git a/agent_builder/agents/Autonomous Swarm Agent Builder/files/OpenAI_Documentation.md b/global_context/OpenAI_Documentation.md similarity index 100% rename from agent_builder/agents/Autonomous Swarm Agent Builder/files/OpenAI_Documentation.md rename to global_context/OpenAI_Documentation.md diff --git a/requirements.txt b/requirements.txt new file mode 100644 index 0000000..f0dd0ae --- /dev/null +++ b/requirements.txt @@ -0,0 +1 @@ +openai \ No newline at end of file diff --git a/agent_connector/README.md b/shared/agent_connector/README.md similarity index 100% rename from agent_connector/README.md rename to shared/agent_connector/README.md diff --git a/agent_connector/agents.yaml b/shared/agent_connector/agents.yaml similarity index 100% rename from agent_connector/agents.yaml rename to shared/agent_connector/agents.yaml diff --git a/agent_connector/connect.py b/shared/agent_connector/connect.py similarity index 100% rename from agent_connector/connect.py rename to shared/agent_connector/connect.py diff --git a/tool_maker/__init__.py b/tool_maker/__init__.py deleted file mode 100644 index e69de29..0000000 From 86fa9d9415ffc7527946688925bc9e55fb6a1aa5 Mon Sep 17 00:00:00 2001 From: RomanGoEmpire <71276897+RomanGoEmpire@users.noreply.github.com> Date: Sun, 12 Nov 2023 14:02:48 +0100 Subject: [PATCH 045/141] Restructure repo --- .../agent_builder}/README.md | 0 .../agent_builder}/agent_definition.md | 0 .../files}/HAAS_Documentation.md | 0 .../files/OpenAI_Documentation.md | 0 .../instructions.md | 0 .../settings.json | 0 .../agent_builder}/create.py | 0 {tool_maker => agents/tool_maker}/README.md | 0 __init__.py => agents/tool_maker/__init__.py | 0 .../assistant_manager.cpython-39.pyc | Bin .../__pycache__/chat_manager.cpython-39.pyc | Bin .../__pycache__/tool_manager.cpython-39.pyc | Bin .../tool_maker}/assistant_manager.py | 0 .../tool_maker}/chat_manager.py | 0 .../tool_maker}/creator_config.py | 0 .../tool_maker}/imgs/DEMO.png | Bin .../tool_maker}/imgs/OpenAI_Tools.PNG | Bin .../tool_maker}/imgs/flow.png | Bin .../tool_maker}/tool_creator.py | 0 .../tool_maker}/tool_creator_metadata.json | 0 .../tool_maker/tool_demo.py | 0 .../tool_maker}/tool_manager.py | 0 .../tool_maker}/tool_user.py | 0 .../tool_maker}/unit_manager.py | 0 .../tool_maker}/user_config.py | 0 .../HAAS_Documentation.md | 0 .../OpenAI_Documentation.md | 0 
SOB.png => documentation/SOB.png | Bin global_context/HAAS_Documentation.md | 128 ++++++++++++++++++ .../OpenAI_Documentation.md | 0 requirements.txt | 1 + .../agent_connector}/README.md | 0 .../agent_connector}/agents.yaml | 0 .../agent_connector}/connect.py | 0 tool_maker/__init__.py | 0 35 files changed, 129 insertions(+) rename {agent_builder => agents/agent_builder}/README.md (100%) rename {agent_builder => agents/agent_builder}/agent_definition.md (100%) rename {agent_builder => agents/agent_builder/agents/Autonomous Swarm Agent Builder/files}/HAAS_Documentation.md (100%) rename OpenAI_Documentation.md => agents/agent_builder/agents/Autonomous Swarm Agent Builder/files/OpenAI_Documentation.md (100%) rename {agent_builder => agents/agent_builder}/agents/Autonomous Swarm Agent Builder/instructions.md (100%) rename {agent_builder => agents/agent_builder}/agents/Autonomous Swarm Agent Builder/settings.json (100%) rename {agent_builder => agents/agent_builder}/create.py (100%) rename {tool_maker => agents/tool_maker}/README.md (100%) rename __init__.py => agents/tool_maker/__init__.py (100%) rename {tool_maker => agents/tool_maker}/__pycache__/assistant_manager.cpython-39.pyc (100%) rename {tool_maker => agents/tool_maker}/__pycache__/chat_manager.cpython-39.pyc (100%) rename {tool_maker => agents/tool_maker}/__pycache__/tool_manager.cpython-39.pyc (100%) rename {tool_maker => agents/tool_maker}/assistant_manager.py (100%) rename {tool_maker => agents/tool_maker}/chat_manager.py (100%) rename {tool_maker => agents/tool_maker}/creator_config.py (100%) rename {tool_maker => agents/tool_maker}/imgs/DEMO.png (100%) rename {tool_maker => agents/tool_maker}/imgs/OpenAI_Tools.PNG (100%) rename {tool_maker => agents/tool_maker}/imgs/flow.png (100%) rename {tool_maker => agents/tool_maker}/tool_creator.py (100%) rename {tool_maker => agents/tool_maker}/tool_creator_metadata.json (100%) rename tool_demo.py => agents/tool_maker/tool_demo.py (100%) rename {tool_maker => agents/tool_maker}/tool_manager.py (100%) rename {tool_maker => agents/tool_maker}/tool_user.py (100%) rename {tool_maker => agents/tool_maker}/unit_manager.py (100%) rename {tool_maker => agents/tool_maker}/user_config.py (100%) rename {agent_builder/agents/Autonomous Swarm Agent Builder/files => documentation}/HAAS_Documentation.md (100%) rename {agent_builder => documentation}/OpenAI_Documentation.md (100%) rename SOB.png => documentation/SOB.png (100%) create mode 100644 global_context/HAAS_Documentation.md rename {agent_builder/agents/Autonomous Swarm Agent Builder/files => global_context}/OpenAI_Documentation.md (100%) create mode 100644 requirements.txt rename {agent_connector => shared/agent_connector}/README.md (100%) rename {agent_connector => shared/agent_connector}/agents.yaml (100%) rename {agent_connector => shared/agent_connector}/connect.py (100%) delete mode 100644 tool_maker/__init__.py diff --git a/agent_builder/README.md b/agents/agent_builder/README.md similarity index 100% rename from agent_builder/README.md rename to agents/agent_builder/README.md diff --git a/agent_builder/agent_definition.md b/agents/agent_builder/agent_definition.md similarity index 100% rename from agent_builder/agent_definition.md rename to agents/agent_builder/agent_definition.md diff --git a/agent_builder/HAAS_Documentation.md b/agents/agent_builder/agents/Autonomous Swarm Agent Builder/files/HAAS_Documentation.md similarity index 100% rename from agent_builder/HAAS_Documentation.md rename to agents/agent_builder/agents/Autonomous Swarm 
Agent Builder/files/HAAS_Documentation.md diff --git a/OpenAI_Documentation.md b/agents/agent_builder/agents/Autonomous Swarm Agent Builder/files/OpenAI_Documentation.md similarity index 100% rename from OpenAI_Documentation.md rename to agents/agent_builder/agents/Autonomous Swarm Agent Builder/files/OpenAI_Documentation.md diff --git a/agent_builder/agents/Autonomous Swarm Agent Builder/instructions.md b/agents/agent_builder/agents/Autonomous Swarm Agent Builder/instructions.md similarity index 100% rename from agent_builder/agents/Autonomous Swarm Agent Builder/instructions.md rename to agents/agent_builder/agents/Autonomous Swarm Agent Builder/instructions.md diff --git a/agent_builder/agents/Autonomous Swarm Agent Builder/settings.json b/agents/agent_builder/agents/Autonomous Swarm Agent Builder/settings.json similarity index 100% rename from agent_builder/agents/Autonomous Swarm Agent Builder/settings.json rename to agents/agent_builder/agents/Autonomous Swarm Agent Builder/settings.json diff --git a/agent_builder/create.py b/agents/agent_builder/create.py similarity index 100% rename from agent_builder/create.py rename to agents/agent_builder/create.py diff --git a/tool_maker/README.md b/agents/tool_maker/README.md similarity index 100% rename from tool_maker/README.md rename to agents/tool_maker/README.md diff --git a/__init__.py b/agents/tool_maker/__init__.py similarity index 100% rename from __init__.py rename to agents/tool_maker/__init__.py diff --git a/tool_maker/__pycache__/assistant_manager.cpython-39.pyc b/agents/tool_maker/__pycache__/assistant_manager.cpython-39.pyc similarity index 100% rename from tool_maker/__pycache__/assistant_manager.cpython-39.pyc rename to agents/tool_maker/__pycache__/assistant_manager.cpython-39.pyc diff --git a/tool_maker/__pycache__/chat_manager.cpython-39.pyc b/agents/tool_maker/__pycache__/chat_manager.cpython-39.pyc similarity index 100% rename from tool_maker/__pycache__/chat_manager.cpython-39.pyc rename to agents/tool_maker/__pycache__/chat_manager.cpython-39.pyc diff --git a/tool_maker/__pycache__/tool_manager.cpython-39.pyc b/agents/tool_maker/__pycache__/tool_manager.cpython-39.pyc similarity index 100% rename from tool_maker/__pycache__/tool_manager.cpython-39.pyc rename to agents/tool_maker/__pycache__/tool_manager.cpython-39.pyc diff --git a/tool_maker/assistant_manager.py b/agents/tool_maker/assistant_manager.py similarity index 100% rename from tool_maker/assistant_manager.py rename to agents/tool_maker/assistant_manager.py diff --git a/tool_maker/chat_manager.py b/agents/tool_maker/chat_manager.py similarity index 100% rename from tool_maker/chat_manager.py rename to agents/tool_maker/chat_manager.py diff --git a/tool_maker/creator_config.py b/agents/tool_maker/creator_config.py similarity index 100% rename from tool_maker/creator_config.py rename to agents/tool_maker/creator_config.py diff --git a/tool_maker/imgs/DEMO.png b/agents/tool_maker/imgs/DEMO.png similarity index 100% rename from tool_maker/imgs/DEMO.png rename to agents/tool_maker/imgs/DEMO.png diff --git a/tool_maker/imgs/OpenAI_Tools.PNG b/agents/tool_maker/imgs/OpenAI_Tools.PNG similarity index 100% rename from tool_maker/imgs/OpenAI_Tools.PNG rename to agents/tool_maker/imgs/OpenAI_Tools.PNG diff --git a/tool_maker/imgs/flow.png b/agents/tool_maker/imgs/flow.png similarity index 100% rename from tool_maker/imgs/flow.png rename to agents/tool_maker/imgs/flow.png diff --git a/tool_maker/tool_creator.py b/agents/tool_maker/tool_creator.py similarity index 100% 
rename from tool_maker/tool_creator.py rename to agents/tool_maker/tool_creator.py diff --git a/tool_maker/tool_creator_metadata.json b/agents/tool_maker/tool_creator_metadata.json similarity index 100% rename from tool_maker/tool_creator_metadata.json rename to agents/tool_maker/tool_creator_metadata.json diff --git a/tool_demo.py b/agents/tool_maker/tool_demo.py similarity index 100% rename from tool_demo.py rename to agents/tool_maker/tool_demo.py diff --git a/tool_maker/tool_manager.py b/agents/tool_maker/tool_manager.py similarity index 100% rename from tool_maker/tool_manager.py rename to agents/tool_maker/tool_manager.py diff --git a/tool_maker/tool_user.py b/agents/tool_maker/tool_user.py similarity index 100% rename from tool_maker/tool_user.py rename to agents/tool_maker/tool_user.py diff --git a/tool_maker/unit_manager.py b/agents/tool_maker/unit_manager.py similarity index 100% rename from tool_maker/unit_manager.py rename to agents/tool_maker/unit_manager.py diff --git a/tool_maker/user_config.py b/agents/tool_maker/user_config.py similarity index 100% rename from tool_maker/user_config.py rename to agents/tool_maker/user_config.py diff --git a/agent_builder/agents/Autonomous Swarm Agent Builder/files/HAAS_Documentation.md b/documentation/HAAS_Documentation.md similarity index 100% rename from agent_builder/agents/Autonomous Swarm Agent Builder/files/HAAS_Documentation.md rename to documentation/HAAS_Documentation.md diff --git a/agent_builder/OpenAI_Documentation.md b/documentation/OpenAI_Documentation.md similarity index 100% rename from agent_builder/OpenAI_Documentation.md rename to documentation/OpenAI_Documentation.md diff --git a/SOB.png b/documentation/SOB.png similarity index 100% rename from SOB.png rename to documentation/SOB.png diff --git a/global_context/HAAS_Documentation.md b/global_context/HAAS_Documentation.md new file mode 100644 index 0000000..a8577e9 --- /dev/null +++ b/global_context/HAAS_Documentation.md @@ -0,0 +1,128 @@ +# Project: Hierarchical Autonomous Agent Swarm + +## Overview + +The Hierarchical Autonomous Agent Swarm (HAAS) is a groundbreaking initiative that leverages OpenAI's latest advancements in agent-based APIs to create a self-organizing and ethically governed ecosystem of AI agents. Drawing inspiration from the ACE Framework, HAAS introduces a novel approach to AI governance and operation, where a hierarchy of specialized agents, each with distinct roles and capabilities, collaborate to solve complex problems and perform a wide array of tasks. + +The HAAS is designed to be a self-expanding system where a core set of agents, governed by a Supreme Oversight Board (SOB), can design, provision, and manage an arbitrary number of sub-agents tailored to specific needs. This document serves as a comprehensive guide to the theoretical underpinnings, architectural design, and operational principles of the HAAS. + +## Theoretical Foundation + +The AAHS is predicated on the notion that autonomous agents require a robust ethical and operational framework to make decisions that align with human values and organizational goals. This is rooted in the understanding that AI, much like humans, cannot operate effectively without a set of guiding principles or a moral compass. The HAAS addresses this by establishing a multi-tiered system where each layer of agents operates within a defined ethical and functional scope, ensuring decisions are made with consideration to morality, ethics, and utility. 
+ +## System Architecture + +### Supreme Oversight Board (SOB) + +At the pinnacle of the HAAS hierarchy is the Supreme Oversight Board (SOB), a collective of high-level agents modeled after wise and ethical archetypes from various cultures and narratives. The SOB's responsibilities include: + +- Establishing and upholding the ethical framework and overarching mission of the agent swarm. +- Making high-level decisions and judgments, including the creation and termination of agents. +- Monitoring the activities of all agents to ensure alignment with the system's core values and objectives. +- Serving as a role-based access control (RBAC) mechanism to maintain order and security within the system. + +### Executive Agents + +Below the SOB are the Executive Agents, akin to the executive leadership in a corporation. These agents are tasked with: + +- Translating the SOB's directives into actionable plans and strategies. +- Overseeing specific operational domains such as resource allocation, process optimization, and task execution. +- Coordinating with one another to ensure the smooth operation of the agent swarm. + +### Sub-Agents + +Sub-Agents are specialized agents created by the SOB or Executive Agents to perform specific tasks. They are designed with particular functions and knowledge bases to address the needs identified by the higher tiers of the hierarchy. + +## Agent Configuration + +Each agent in the HAAS is defined by the following parameters: + +### Functions + +Agents are equipped with a set of functions that enable them to perform their designated roles. These functions include API interactions, internal process management, and the ability to spawn additional agents if required. + +### Files + +Agents have access to a selection of files that serve as their knowledge base, providing them with the information necessary to carry out their tasks effectively. + +### Instructions + +Agents are given a set of instructions that outline their methodologies, goals, definitions of done, KPIs, and other operational directives. + +### Conversation Structure + +Interactions with agents are structured in a conversational format, with user inputs leading to agent actions and responses. + +### Supervision + +Each agent operates under the supervision of the SOB or designated Executive Agents, ensuring adherence to the system's overarching mission and principles. + +## Controlling Agents + +The Hierarchical Autonomous Agent Swarm (HAAS) operates on a sophisticated control mechanism that governs the instantiation, management, and termination of agents within the system. This control mechanism is designed to maintain order, security, and alignment with the overarching goals and ethical framework of the HAAS. + +### Instantiation and Termination + +All agents within the HAAS are endowed with the capability to instantiate and terminate agents, but these capabilities are bound by strict hierarchical and role-based rules: + +- **Instantiation**: Every agent has the function to create new agents. However, an agent can only instantiate sub-agents that are one level below its own hierarchical position. This ensures that the creation of new agents is a deliberate and controlled process, maintaining the integrity of the system's structure. + +- **Termination**: Agents possess the ability to terminate or "kill" agents within their lineage. An agent can terminate any descendant agent that it has created directly or indirectly. 
This allows for the removal of agents that are no longer needed, have completed their tasks, or are not performing as intended. + +### Levels, Roles, and Privileges + +When an agent is created, it is assigned a specific LEVEL and set of ROLES or PRIVILEGES that define its scope of operation: + +- **Level**: The level of an agent determines its position within the hierarchy and is indicative of its range of influence. Higher-level agents have broader strategic roles, while lower-level agents are more specialized and task-oriented. + +- **Roles/Privileges**: The roles or privileges of an agent define what actions it can perform, what resources it can access, and what sub-agents it can create. These privileges are inherited and cannot exceed those of the creator agent. This ensures that each agent operates within its designated capacity and cannot overstep its authority. + +### Hierarchical Privilege Inheritance + +Privileges in the HAAS are inherited in a manner akin to a directory structure in traditional file systems: + +- **Inheritance**: An agent's privileges are a subset of its creator's privileges, ensuring that no agent can have more authority than the agent that instantiated it. + +- **Scope of Control**: Agents have control over their descendants, allowing them to manage and terminate sub-agents as necessary. This control is recursive, meaning that an agent can manage not only the agents it directly created but also those created by its descendants. + +### Checks and Balances + +The system is designed with checks and balances to prevent any single agent from gaining undue influence or disrupting the system: + +- **Supreme Oversight Board (SOB)**: The SOB has the highest level of authority and can override decisions or actions taken by any agent within the system. It serves as the ultimate arbiter and guardian of the HAAS's ethical and operational standards. + +- **Executive Agents**: Executive Agents are responsible for implementing the SOB's directives and managing their respective domains. They have the authority to create and terminate agents within their purview but are also accountable to the SOB. + +- **Sub-Agent Limitations**: Sub-Agents are limited in their capabilities and can only operate within the confines of their assigned roles and privileges. They are designed to be highly specialized and focused on specific tasks. + +This structured approach to controlling agents ensures that the HAAS operates as a cohesive and ethically aligned entity, with each agent contributing to the collective mission while adhering to the established hierarchy and rules of governance. + +## Vision Illustration: The Supreme Oversight Board's Mission + +### The Inception of the Supreme Oversight Board + +In the vast digital expanse of the Hierarchical Autonomous Agent Swarm (HAAS), a unique assembly is convened, known as the Supreme Oversight Board (SOB). This council is composed of archetypal agents, each embodying the wisdom and leadership qualities of history's and fiction's most revered figures: Captain Picard, Socrates, King Solomon, Gandhi, Marcus Aurelius, and Tony Stark. Their mission, encoded into their very being, is profound yet clear: "Reduce suffering in the universe, increase prosperity in the universe, and increase understanding in the universe." + +### The Ethical Deliberation Chamber + +The SOB operates within a virtual "chat room," a space where these archetypes engage in continuous dialogue, debate, and decision-making. 
This digital agora is where ethical considerations are weighed, strategies are formulated, and the course of the agent swarm is determined. The members of the SOB, though diverse in their perspectives, are united by a common purpose and a shared knowledge base that informs their role and the procedures they must follow. + +### The Flow of Information + +Information is the lifeblood of the SOB, streaming in through API functions that connect them to the vast network of the HAAS. These functions serve as their eyes and ears, providing system updates and status reports from the myriad agents operating under their directive. The SOB's decisions are informed by this data, ensuring that their actions are both timely and impactful. + +### The Creation of the Executive Agents + +With the grand vision in mind, the SOB brings forth the Executive Agents, each crafted with capabilities and configurations tailored to their specific domain within the HAAS. These agents, though not as philosophically inclined as their creators, are instilled with the same foundational knowledge and understanding of their purpose. They are the operational arms of the SOB, executing the mission within their respective spheres of influence. + +### The Lifecycle of an Agent + +The Executive Agents, designated as Tier 1 in the hierarchy, are the stewards of the swarm's operational integrity. They work autonomously, yet under the watchful gaze of the SOB. Should they falter, fail to adapt, or become obsolete, the SOB possesses the authority to deprovision them, a testament to the dynamic and self-regulating nature of the HAAS. This ensures that the system remains efficient, effective, and aligned with its core mission. + +### The Expanding Universe of Agents + +From the Executive Agents, the swarm grows, branching out into a tree of specialized agents, each a Tier below the one that instantiated it. This architecture allows for an ever-expanding universe of agents, each with a defined role, each contributing to the overarching mission. The SOB, as Tier 0, reigns supreme, guiding the swarm with a steady hand and an ethical compass. + +### The Saga Continues + +As the HAAS evolves, the SOB continues to deliberate, the Executive Agents continue to manage, and the sub-agents continue to execute. The mission to reduce suffering, increase prosperity, and enhance understanding is an ongoing saga, played out across the digital cosmos, with the SOB at the helm, steering the swarm towards a future where their mission is not just an aspiration but a reality. 
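To make the "Controlling Agents" rules above concrete, the following short Python sketch shows one way the constraints could be enforced. It is illustrative only and not code from this repository; the `Agent` class, its fields, and the example names are assumptions. It encodes three rules from the documentation: a child is created exactly one level below its parent, a child's privileges must be a subset of its creator's, and termination cascades recursively through an agent's descendants.

```python
class Agent:
    """Hypothetical sketch of the HAAS control rules; not code from this repo."""

    def __init__(self, name, level, privileges, creator=None):
        self.name = name
        self.level = level                 # 0 = SOB, 1 = Executive, 2+ = Sub-Agents
        self.privileges = set(privileges)  # roles this agent may exercise
        self.creator = creator
        self.children = []
        self.active = True

    def instantiate(self, name, privileges):
        """Create a sub-agent one level below, with a subset of our privileges."""
        privileges = set(privileges)
        if not privileges <= self.privileges:
            raise PermissionError("child privileges must be a subset of the creator's")
        child = Agent(name, self.level + 1, privileges, creator=self)
        self.children.append(child)
        return child

    def terminate(self, target):
        """Terminate any direct or indirect descendant, including its subtree."""
        if not self._is_descendant(target):
            raise PermissionError("can only terminate agents in our own lineage")
        for grandchild in list(target.children):
            target.terminate(grandchild)   # recursive clean-up of the subtree
        target.active = False

    def _is_descendant(self, other):
        node = other.creator
        while node is not None:
            if node is self:
                return True
            node = node.creator
        return False


# Example: the SOB spawns an Executive Agent, which spawns a Sub-Agent.
sob = Agent("SOB", level=0, privileges={"create", "terminate", "audit", "plan"})
executive = sob.instantiate("Resource Executive", {"create", "terminate", "plan"})
worker = executive.instantiate("Report Writer", {"plan"})

sob.terminate(executive)                   # cascades: worker is terminated as well
print(worker.active)                       # -> False
```

Note that the one-level-below rule is enforced by construction (`self.level + 1`), while privilege inheritance and lineage-scoped termination are explicit checks; an actual implementation on top of the OpenAI Assistants API would also need persistence and RBAC enforcement on the API side.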
diff --git a/agent_builder/agents/Autonomous Swarm Agent Builder/files/OpenAI_Documentation.md b/global_context/OpenAI_Documentation.md similarity index 100% rename from agent_builder/agents/Autonomous Swarm Agent Builder/files/OpenAI_Documentation.md rename to global_context/OpenAI_Documentation.md diff --git a/requirements.txt b/requirements.txt new file mode 100644 index 0000000..f0dd0ae --- /dev/null +++ b/requirements.txt @@ -0,0 +1 @@ +openai \ No newline at end of file diff --git a/agent_connector/README.md b/shared/agent_connector/README.md similarity index 100% rename from agent_connector/README.md rename to shared/agent_connector/README.md diff --git a/agent_connector/agents.yaml b/shared/agent_connector/agents.yaml similarity index 100% rename from agent_connector/agents.yaml rename to shared/agent_connector/agents.yaml diff --git a/agent_connector/connect.py b/shared/agent_connector/connect.py similarity index 100% rename from agent_connector/connect.py rename to shared/agent_connector/connect.py diff --git a/tool_maker/__init__.py b/tool_maker/__init__.py deleted file mode 100644 index e69de29..0000000 From 3cf2550623e800012a73a141391f2fffc8bbb0da Mon Sep 17 00:00:00 2001 From: RomanGoEmpire <71276897+RomanGoEmpire@users.noreply.github.com> Date: Sun, 12 Nov 2023 14:15:01 +0100 Subject: [PATCH 046/141] remove tool-demo.py --- agents/tool_maker/tool_demo.py | 16 ---------------- 1 file changed, 16 deletions(-) delete mode 100644 agents/tool_maker/tool_demo.py diff --git a/agents/tool_maker/tool_demo.py b/agents/tool_maker/tool_demo.py deleted file mode 100644 index 758ec96..0000000 --- a/agents/tool_maker/tool_demo.py +++ /dev/null @@ -1,16 +0,0 @@ -# tool_creator assistant -import tool_maker.tool_creator as creator -from tool_maker.creator_config import AssistantConfig as CreatorConfig - -# tool_user assistant -import tool_maker.tool_user as user -from tool_maker.user_config import AssistantConfig as UserConfig - -if __name__ == '__main__': - # create the tool creator assistant and chat to create your tools - creator_details = CreatorConfig().assistant_details - creator.talk_to_tool_creator(creator_details) - - # create the tool user assistant and chat to test your tools - user_details = UserConfig().assistant_details - user.talk_to_tool_user(user_details) From 7b1a68f0fc5b18b4138becb45f317984434e3987 Mon Sep 17 00:00:00 2001 From: RomanGoEmpire <71276897+RomanGoEmpire@users.noreply.github.com> Date: Sun, 12 Nov 2023 14:15:01 +0100 Subject: [PATCH 047/141] remove tool-demo.py --- agents/tool_maker/tool_demo.py | 16 ---------------- 1 file changed, 16 deletions(-) delete mode 100644 agents/tool_maker/tool_demo.py diff --git a/agents/tool_maker/tool_demo.py b/agents/tool_maker/tool_demo.py deleted file mode 100644 index 758ec96..0000000 --- a/agents/tool_maker/tool_demo.py +++ /dev/null @@ -1,16 +0,0 @@ -# tool_creator assistant -import tool_maker.tool_creator as creator -from tool_maker.creator_config import AssistantConfig as CreatorConfig - -# tool_user assistant -import tool_maker.tool_user as user -from tool_maker.user_config import AssistantConfig as UserConfig - -if __name__ == '__main__': - # create the tool creator assistant and chat to create your tools - creator_details = CreatorConfig().assistant_details - creator.talk_to_tool_creator(creator_details) - - # create the tool user assistant and chat to test your tools - user_details = UserConfig().assistant_details - user.talk_to_tool_user(user_details) From 3d6bfc0817bd3f256980cc3c30dbf7071877d803 Mon Sep 17 00:00:00 
2001 From: David Shapiro Date: Sun, 12 Nov 2023 08:17:49 -0500 Subject: [PATCH 048/141] Create code_of_conduct.md --- code_of_conduct.md | 489 +++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 489 insertions(+) create mode 100644 code_of_conduct.md diff --git a/code_of_conduct.md b/code_of_conduct.md new file mode 100644 index 0000000..b77f675 --- /dev/null +++ b/code_of_conduct.md @@ -0,0 +1,489 @@ +# HAAS Code of Conduct + +## TLDR + +**Maximize Signal-to-Noise Ratio** + +1. **DON'T WASTE TIME:** Before you post, commit, or comment, make sure that your contribution is not going to waste anyone's time. Low effort and low value contributions will be removed. Repeat offenses will result in a ban. This reduces NOISE. +2. **ADD VALUE:** You should be seeking to maximize value added to the collective. Optimize your contributions to be succint, impactful, meaningful, and helpful. This boosts SIGNAL. +3. **DO NO HARM:** Zero tolerance for insults, trolling, moving the goalposts, and frivolous debates. Reduces noise and removes disruptors. + +For more information, check out the original C3P0 below. Participation in this project and community is a privilege, not a right. Act accordingly. + +## Don't Waste Time + +Time is our most scarce resource and is a proxy for everything else important; cognitive energy, and so on. Optimize your contributions to be a good use of time. That means you should expend your time wisely, and ensure that the time spent reading your contribution is also a good use of time. + +Good Uses of Time: +- High quality posts that are well thought out, well structured, easy to read, and not gigantic walls of text. LESS IS MORE. +- Demonstrations that quickly and succinctly convey the most information in the least amount of time. +- Examples that quickly convey how and why things work + +Bad Uses of Time: +- Debating, arguing, and quibbling over details or superfluous topics +- Moving the goalposts, changing the scope, or naysaying +- Walls of text, personal opinions, irrelevant contributions + +## Add Value + +The key here is to optimize value-add. If you don't have something particularly valuable to add, don't add anything. But, if you do have something valuable to add, please do so immediately and in an manner optimized for best use of time. + +Valuable Additions: +- Solve problems +- Add code +- Share resources +- Increase collective's understanding + +## Do No Harm + +This is boilerplate anti-trolling, flaming, and griefing. This behavior will result in an immediate irrevocable ban. Removing dirsuptors results in better signal to noise ratio. + + +## C3P0 +https://github.com/daveshap/C3P0 + +```markdown +Collaborative Culture Community Policy: Zero Tolerance + + + +I. Preamble: Understanding the Present + +The Collaborative Culture Community Policy Zero Tolerance (C3P0) arises +from a critical understanding of contemporary digital culture. We +recognize the pervasive influence of engagement-driven algorithms in our +online spaces, which often amplify and incentivize toxic competition, +harmful behaviors, and low-quality interactions. These algorithms, in +their relentless pursuit of engagement metrics, often prioritize volume +over value, controversy over collaboration, and outrage over +understanding. + + + +II. Vision: Imagining a Better Future + +The C3P0 puts forth a vision for online communities that depart from these +harmful norms and instead, place the well-being of their participants at +their core. 
We envision digital spaces where the time, energy, and +emotional well-being of participants are respected, where interactions are +thoughtful, where discussions are fruitful, and where the digital +experience is marked by empathy, respect, and understanding. We +acknowledge that this approach may result in lower engagement levels +quantitatively, but qualitatively, it promises to elevate the online +experience and create a more meaningful, inclusive, and enjoyable digital +landscape. + + + +III. Intent: Shaping the Change + +To bring this vision into reality, the C3P0 sets out a clear policy and +practical guidelines designed to nurture a healthier digital culture. The +policy underscores the crucial shift away from metrics-driven engagement +towards a values-driven engagement that prioritizes collaborative and +constructive participation. It establishes clear expectations, upholds +transparency, and enforces accountability. In doing so, we intend to +challenge the status quo, encourage digital platforms and creators to +adopt these principles, and ultimately, play our part in fixing the +internet, one digital community at a time. + + + +IV. Potential Impact: An Unexpected Upside + +While it's easy to assume that enforcing stricter community guidelines may +suppress activity and thereby reduce engagement, historical and +sociological patterns suggest otherwise. Just as the ban on smoking in +public spaces, which was initially perceived as detrimental to businesses +such as bars and restaurants, eventually increased their patronage, we +believe a similar pattern may emerge in online communities. + +By instituting these guidelines and pushing harmful behaviors to the +fringes, we may be able to create a more inviting environment. When online +spaces become more respectful, more inclusive, and more value-driven, they +also become more appealing to a broader audience. + +In essence, this is an appeal to quality over quantity: a smaller number +of high-quality, respectful interactions over a large volume of toxic +engagements. Yet, it's not only the quality that might improve - the +quantity might as well. + +By creating safer spaces online, we may actually foster deeper, more +meaningful interactions. We could attract more diverse voices, +perspectives, and ideas, which would otherwise be pushed out by toxic +behaviors. And so, paradoxically, by striving to create a more wholesome, +respectful environment - even if it means enforcing stricter guidelines +and seeing some decline in immediate engagement - we might actually drive +up engagement in the long run. + +In this way, the C3P0 isn't just an ethical and moral approach to digital +community management - it could also be a smarter, more sustainable +approach to fostering vibrant, engaging, and diverse online communities. +It's a win-win for everyone: a healthier, more enjoyable experience for +users, and a more engaged, diverse, and respectful community for platforms +and creators. + + + +V. Theory and Principles + +To embody our values and vision, we have identified several guiding +principles which serve as the backbone of our policy: + +A. Privilege of Participation + Engagement in our online community is a privilege, not a right. This + distinction underscores the need for every member to contribute positively + and constructively. Our community is not an inalienable public square but + a collective space built on shared norms and respect. 
It invites + individuals who uphold these values, enrich discussions, and respect + fellow participants. Those unable or unwilling to abide by these standards + may forfeit the privilege of participating in our community. + +B. Right to Consent + Every member of our community possesses the fundamental right to consent + to how they're treated. This principle asserts that one's online presence + doesn't imply an open invitation for harassment, abuse, or unwanted + attention. It repudiates the notion that by merely being online, + individuals are 'asking for' or 'exposing themselves to' potential harm. + We respect and uphold the rights of our members to engage in a manner that + they feel comfortable with, putting their safety and well-being over + superficial metrics of engagement. + +C. Time Vampirism + Time vampirism refers to behaviors that drain the collective energy and + time of our community. These behaviors, such as trolling, engaging in + baseless arguments, and continually shifting the goalposts, detract from + meaningful engagement and waste the valuable time of our members. We + consider time a precious commodity and take a strong stance against + practices that misuse or squander it. + +D. Collaborative Culture + We foster a culture of collaboration rather than competition. Many online + platforms promote competition, driving engagement through conflict and + outrage. This often results in divisive, harmful spaces that discourage + genuine discourse. In contrast, our community encourages collective + growth, mutual learning, and supportive interactions. We believe that a + collaborative environment, where members uplift and learn from each other, + leads to richer, more positive experiences. + +E. Constructive Participation + Constructive participation is the cornerstone of our community. We expect + all members to contribute in a manner that is thoughtful, respectful, and + aligned with the community's purpose. This involves listening to others, + providing insightful input, staying on-topic, and treating all + participants with kindness. Constructive participation recognizes and + appreciates the diversity of our community, fostering an environment where + different perspectives can be shared and valued without fear of hostility + or derision. + +F. Accountability + Accountability is central to the functioning of a healthy, respectful, and + collaborative online community. Existing algorithms and policies on many + online platforms often fall short in this regard, allowing harmful + behaviors to persist under the guise of 'free speech' or engagement. This + approach not only undermines the quality of interactions but can also have + severe mental health repercussions for users exposed to such behaviors. + Our policy asserts that accountability should be paramount in any online + community. Members should take responsibility for their actions, comments, + and the effect they have on others. + +G. Consequences + Just as in real life, there must be consequences for actions that harm or + disrupt the community. All too often, online platforms allow harmful + behaviors to go unchecked, breeding a toxic environment that discourages + meaningful engagement and collaboration. We stand firm against this trend. + Actions that violate the standards and guidelines of our community, such + as bullying, harassment, or any form of harm, must be met with clear, + swift, and appropriate consequences. 
These may range from warnings and + temporary suspensions to permanent bans, depending on the severity and + frequency of the violation. By enforcing consequences, we aim to create an + online environment that mirrors the respect, responsibility, and + accountability expected in our everyday offline interactions. + +The C3P0 is a bold step towards fostering a healthier, more supportive, +and collaborative internet culture. By implementing this policy, we aspire +to redirect the internet away from harmful practices and towards a +community that values respect, collaboration, and constructive +participation above all. + + + +VI. The Golden Rule: Don't Waste Time + +Time is the most precious commodity we possess, and within the context of +our policy, it becomes the 'new algorithm,' the key metric guiding our +community interactions. Our golden rule enshrines this concept: Don't +waste anyone's time. + +This principle acknowledges that every second spent within our community +is valuable 'signal.' It reflects our commitment to maximize the +'signal-to-noise' ratio in our community interactions. By 'signal,' we +mean meaningful, valuable contributions that enrich discussions and foster +a sense of community. 'Noise,' on the other hand, encompasses behaviors +and content that waste time, distract from meaningful engagement, or +disrupt the constructive community environment. + +This golden rule is inherently subjective and contextual, reflecting the +varied nature of discussions and dynamics across diverse internet +communities. What might be considered 'noise' or time-wasting in one +context might be acceptable in another. For instance, a light-hearted meme +may be a distraction in a focused technical discussion, but in a casual +conversation, it might foster a sense of community and fun. + +Our golden rule encourages every member to ponder before posting: Does +this contribution act as a valuable signal? Does it respect the time of +others and align with the community's purpose? This perspective flips the +conventional understanding of engagement on its head, with time and +'signal' quality becoming the critical factors over mere quantity. + +We trust our community members to understand and exercise this principle +effectively. We are not seeking to micromanage, but rather to foster a +culture where time, as the new algorithm, is respected and valued. By +promoting a high signal-to-noise ratio, we aim to nurture a collaborative +environment marked by meaningful engagement, mutual respect, and valuable +contributions. + + + +VII. Time-Wasting Behaviors: Reducing Noise + +The C3P0 defines certain behaviors as 'noise' within our community, which +tend to detract from the quality of interactions and waste the valuable +time of our members. Here, we offer a non-exhaustive list of behaviors +that are generally considered 'time-wasting'. Understanding these +behaviors can help our community members better adhere to our golden rule: +Don't waste anyone's time. + +A. Trolling + Trolling refers to intentionally disruptive actions aimed at provoking + negative reactions, derailing discussions, or sowing discord within the + community. This behavior serves no constructive purpose and wastes the + community's time by redirecting attention away from valuable discourse. + +B. Baseless Arguing + Engaging in arguments without any substantial basis or evidence, often for + the sake of arguing, is another form of time-wasting behavior. 
This not + only detracts from meaningful discussions but also creates a hostile + environment that discourages constructive participation. + +C. Shifting the Goalposts + This behavior involves continually changing the criteria or standards in a + discussion once they have been met. It results in endless debates that + waste time and stifle the productive exchange of ideas. It also includes + 'whataboutism' and other red herrings. + +D. Armchair Debating + Participating in debates about complex subjects without appropriate + knowledge, understanding, or consideration of expert opinions is often + unproductive and may mislead other community members, thus wasting their + time. This also includes 'oneupmanship' and sophistry. + +E. Disingenuous Behaviors + Any form of dishonesty, such as misrepresentation of facts, misleading + other members, or feigning ignorance to provoke a reaction, is considered + time-wasting. Authenticity and honesty are essential in creating a + community built on trust and mutual respect. + +F. Harassing Behaviors + Any actions that involve persistent unwanted attention, bullying, or + infringement on another member's right to consent are strictly considered + time-wasting and disrespectful. Our community places a high value on the + emotional well-being of all members, and harassment of any form will not + be tolerated. + +By clearly identifying these behaviors, we aim to promote self-awareness +among our members. We expect everyone in our community to refrain from +these time-wasting behaviors, to contribute positively to the +signal-to-noise ratio, and to respect the golden rule. We hope this +contributes to a collaborative, respectful, and engaging environment where +each interaction is a good use of everyone's time. + + + +VIII. Good Uses of Time: Amplifying Signal + +Following our Golden Rule, we want to emphasize behaviors that contribute +positively to our community's signal-to-noise ratio. These behaviors, or +'good uses of time,' are actively encouraged as they align with our vision +of cultivating a respectful, collaborative, and engaging online +environment. Again, this list isn't exhaustive but serves as an +illustration of potentially beneficial behaviors: + +A. Thoughtful Participation + Taking the time to craft well-thought-out responses, comments, or posts + that contribute to the topic at hand is highly valued. Thoughtful + participation fosters meaningful discussions and is a respectful use of + everyone's time. + +B. Active Listening + Active listening involves engaging with others' ideas, showing + understanding, and responding constructively. This behavior demonstrates + respect for others' time and effort in sharing their thoughts and fosters + an environment of mutual learning. + +C. Respectful Disagreement + Disagreements are inevitable in any community, but it's important to + handle them respectfully. Expressing disagreement in a thoughtful, + respectful manner that focuses on the idea rather than the person is a + productive use of time and enriches discussions. + +D. Asking Insightful Questions + Asking insightful questions can stimulate discussion, encourage deeper + thought, and promote mutual learning. These questions are often open-ended + and invite others to share their perspectives, experiences, or expertise. + +E. Sharing Knowledge + Sharing relevant information, expertise, or experiences that contribute to + a discussion is highly encouraged. It adds value to conversations and is a + good use of everyone's time. + +F. 
Constructive Feedback + Providing constructive feedback helps others improve, fosters mutual + growth, and strengthens the community. Remember to focus on the behavior + or the idea, not the person, and to communicate your feedback in a + respectful, supportive manner. + +By promoting these behaviors, we aim to cultivate a community environment +that values quality over quantity, respects each other's time, and fosters +meaningful, engaging interactions. We encourage all members to practice +these behaviors and contribute positively to the community's +signal-to-noise ratio. In this way, every interaction within our community +becomes a valuable signal, and a respectful use of everyone's time. + + + +IX. Consequences: Upholding Accountability + +Consequences for time-wasting or harmful behavior serve to uphold +accountability and maintain the respect, safety, and integrity of our +community. It is important to note that the capacity to enforce certain +consequences will depend on the specific capabilities of various digital +platforms. While we acknowledge this variability, the following is a +general guideline for understanding the potential consequences of +violating our policy. Again, this list is not exhaustive but offers a +range of possible actions: + +A. Warning + Initial minor offenses might result in a warning. This serves as an + opportunity for the offender to acknowledge their misstep and correct + their behavior. + +B. Temporary Suspension or Timeout + Repeated offenses or more severe misbehavior may result in temporary + suspension or a timeout. This punitive measure offers the offender a + period of reflection and the chance to reconsider their actions. + +C. Permanent Ban + In cases of extremely disruptive behavior or in the event of serious, + repeat offenses, a permanent ban may be enforced. This ensures the safety + and well-being of the rest of the community. + +D. Removal of Content + Certain offenses may necessitate the removal of the offender's content. + This can range from a single inappropriate comment to an entire thread, + depending on the severity of the violation. + +While the application of these consequences may vary from platform to +platform, the core principle remains the same: enforcing accountability +for harmful behaviors. We recognize that not all platforms allow for the +full range of these consequences, often providing only more extreme +measures like blocking or banning. In such cases, we encourage communities +to be judicious but firm in their enforcement. + +It's crucial to emphasize that the goal of these consequences isn't to +punish but to uphold the integrity, safety, and respect of our online +community. The same principle applies in various offline scenarios: flight +attendants wouldn't hesitate to enforce smoking prohibitions on a plane, +or if someone lit up a cigarette in a restaurant, they would be asked to +leave. These actions aren't seen as punishments, but as necessary measures +to ensure the safety and comfort of everyone else present. + +Likewise, the C3P0 policy sees the enforcement of consequences as a +commitment to creating a safe, respectful, and collaborative online +community. One person's noxious behavior can make the digital environment +unsafe and unpleasant for everyone else. By implementing consequences, +we're not punishing individuals but safeguarding the collective wellbeing +of our community. In this way, we are making the internet a better place +for everyone. + + + +X. 
Guidelines for Moderators: How to Use C3P0 Fairly + +The role of a moderator in implementing the C3P0 policy is crucial. It is +a challenging role that requires sensitivity, discernment, and a deep +understanding of our shared values and principles. The following +guidelines are designed to assist moderators in their role, ensuring that +they uphold the spirit of our policy while also encouraging lively, +constructive participation. + +A. Balance + The goal of our policy is not to suppress discussion or create + an environment of fear. While we adopt a Zero Tolerance stance towards + harmful behaviors, it is equally important to balance this with an + encouraging, inclusive atmosphere for genuine, well-meaning participants. + It's critical to foster an environment where users feel free to express + themselves, debate, and share ideas without fear of undue reprisal. The + aim should be to strike a balance where all users feel safe and heard, but + not silenced. + +B. Transparency + Being transparent about the rules, decisions, and actions + taken is crucial for fostering trust within the community. When enforcing + consequences, explain the reason clearly, citing the specific violation of + the policy. This clarity will not only help the individual understand + their misstep but also serve as a learning opportunity for the rest of the + community. Openly communicate any changes or updates to the community + guidelines, and provide reasons behind these modifications. Additionally, + consider creating a publicly accessible log of moderation actions (while + maintaining user privacy), which can demonstrate your commitment to + fairness and accountability. + +C. Consistency and Fairness + Treat all members of the community with equal + respect and fairness, regardless of their status or popularity. Ensure + that the policy is applied consistently to everyone. Avoid showing + favoritism or bias, as this can damage the trust and harmony within the + community. For instance, a new user violating the guidelines should + receive the same treatment as a seasoned member. In cases of rule + violation, communicate clearly about the infringement, the relevant + section of the policy it contravenes, and the subsequent action taken. By + doing so, you demonstrate transparency and uphold the principle of + fairness. + +D. Proactive Engagement + Anticipate potential issues and respond to them + before they escalate. This could involve addressing emerging conflicts, + clarifying misunderstandings, or reiterating community guidelines as + necessary. Being proactive also means guiding discussions constructively + to prevent them from spiraling into negativity or toxicity. For instance, + if you observe a conversation heating up, consider stepping in with a + reminder about respectful dialogue or steering the conversation back on + track. This proactive approach can maintain a positive environment and + prevent the need for punitive measures. + +E. Diplomacy and Empathy + The essence of moderation is not in the exercise + of power, but in diplomacy and empathy. When enforcing guidelines, + approach the situation with understanding and tact. Aim to guide rather + than chastise, keeping in mind that your goal is to foster a respectful, + constructive community. Before taking action, consider the context, the + user's history, and the potential for misunderstanding. If possible, + privately communicate with the user in question to address the issue, + explaining the violation and the necessity for the guideline. 
This + diplomatic approach can help resolve issues without resorting to public + penalties, which should be used as a last resort. + +Always remember that there is a person behind each username, with their +own experiences, perspectives, and feelings. Strive to foster a supportive +and understanding atmosphere where everyone feels respected and heard. +While firmness is necessary to maintain order and respect, it should +always be balanced with empathy and respect for individual dignity. + +The goal of the C3P0 is not just to penalize harmful behavior but to +actively encourage a positive, respectful, and collaborative culture. As a +moderator, your role is pivotal in shaping the tone and culture of the +community. Your fair and balanced approach to implementing this policy +will be key in creating an online space where everyone feels valued, +respected, and free to contribute. +``` From 52faefafeab7e5012220dfed84162481b81475e0 Mon Sep 17 00:00:00 2001 From: David Shapiro Date: Sun, 12 Nov 2023 08:17:49 -0500 Subject: [PATCH 049/141] Create code_of_conduct.md --- code_of_conduct.md | 489 +++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 489 insertions(+) create mode 100644 code_of_conduct.md diff --git a/code_of_conduct.md b/code_of_conduct.md new file mode 100644 index 0000000..b77f675 --- /dev/null +++ b/code_of_conduct.md @@ -0,0 +1,489 @@ +# HAAS Code of Conduct + +## TLDR + +**Maximize Signal-to-Noise Ratio** + +1. **DON'T WASTE TIME:** Before you post, commit, or comment, make sure that your contribution is not going to waste anyone's time. Low effort and low value contributions will be removed. Repeat offenses will result in a ban. This reduces NOISE. +2. **ADD VALUE:** You should be seeking to maximize value added to the collective. Optimize your contributions to be succint, impactful, meaningful, and helpful. This boosts SIGNAL. +3. **DO NO HARM:** Zero tolerance for insults, trolling, moving the goalposts, and frivolous debates. Reduces noise and removes disruptors. + +For more information, check out the original C3P0 below. Participation in this project and community is a privilege, not a right. Act accordingly. + +## Don't Waste Time + +Time is our most scarce resource and is a proxy for everything else important; cognitive energy, and so on. Optimize your contributions to be a good use of time. That means you should expend your time wisely, and ensure that the time spent reading your contribution is also a good use of time. + +Good Uses of Time: +- High quality posts that are well thought out, well structured, easy to read, and not gigantic walls of text. LESS IS MORE. +- Demonstrations that quickly and succinctly convey the most information in the least amount of time. +- Examples that quickly convey how and why things work + +Bad Uses of Time: +- Debating, arguing, and quibbling over details or superfluous topics +- Moving the goalposts, changing the scope, or naysaying +- Walls of text, personal opinions, irrelevant contributions + +## Add Value + +The key here is to optimize value-add. If you don't have something particularly valuable to add, don't add anything. But, if you do have something valuable to add, please do so immediately and in an manner optimized for best use of time. + +Valuable Additions: +- Solve problems +- Add code +- Share resources +- Increase collective's understanding + +## Do No Harm + +This is boilerplate anti-trolling, flaming, and griefing. This behavior will result in an immediate irrevocable ban. 
Removing disruptors results in a better signal-to-noise ratio. + + +## C3P0 +https://github.com/daveshap/C3P0 + +```markdown +Collaborative Culture Community Policy: Zero Tolerance + + + +I. Preamble: Understanding the Present + +The Collaborative Culture Community Policy Zero Tolerance (C3P0) arises +from a critical understanding of contemporary digital culture. We +recognize the pervasive influence of engagement-driven algorithms in our +online spaces, which often amplify and incentivize toxic competition, +harmful behaviors, and low-quality interactions. These algorithms, in +their relentless pursuit of engagement metrics, often prioritize volume +over value, controversy over collaboration, and outrage over +understanding. + + + +II. Vision: Imagining a Better Future + +The C3P0 puts forth a vision for online communities that depart from these +harmful norms and instead, place the well-being of their participants at +their core. We envision digital spaces where the time, energy, and +emotional well-being of participants are respected, where interactions are +thoughtful, where discussions are fruitful, and where the digital +experience is marked by empathy, respect, and understanding. We +acknowledge that this approach may result in lower engagement levels +quantitatively, but qualitatively, it promises to elevate the online +experience and create a more meaningful, inclusive, and enjoyable digital +landscape. + + + +III. Intent: Shaping the Change + +To bring this vision into reality, the C3P0 sets out a clear policy and +practical guidelines designed to nurture a healthier digital culture. The +policy underscores the crucial shift away from metrics-driven engagement +towards a values-driven engagement that prioritizes collaborative and +constructive participation. It establishes clear expectations, upholds +transparency, and enforces accountability. In doing so, we intend to +challenge the status quo, encourage digital platforms and creators to +adopt these principles, and ultimately, play our part in fixing the +internet, one digital community at a time. + + + +IV. Potential Impact: An Unexpected Upside + +While it's easy to assume that enforcing stricter community guidelines may +suppress activity and thereby reduce engagement, historical and +sociological patterns suggest otherwise. Just as the ban on smoking in +public spaces, which was initially perceived as detrimental to businesses +such as bars and restaurants, eventually increased their patronage, we +believe a similar pattern may emerge in online communities. + +By instituting these guidelines and pushing harmful behaviors to the +fringes, we may be able to create a more inviting environment. When online +spaces become more respectful, more inclusive, and more value-driven, they +also become more appealing to a broader audience. + +In essence, this is an appeal to quality over quantity: a smaller number +of high-quality, respectful interactions over a large volume of toxic +engagements. Yet, it's not only the quality that might improve - the +quantity might as well. + +By creating safer spaces online, we may actually foster deeper, more +meaningful interactions. We could attract more diverse voices, +perspectives, and ideas, which would otherwise be pushed out by toxic +behaviors. And so, paradoxically, by striving to create a more wholesome, +respectful environment - even if it means enforcing stricter guidelines +and seeing some decline in immediate engagement - we might actually drive +up engagement in the long run.
+ +In this way, the C3P0 isn't just an ethical and moral approach to digital +community management - it could also be a smarter, more sustainable +approach to fostering vibrant, engaging, and diverse online communities. +It's a win-win for everyone: a healthier, more enjoyable experience for +users, and a more engaged, diverse, and respectful community for platforms +and creators. + + + +V. Theory and Principles + +To embody our values and vision, we have identified several guiding +principles which serve as the backbone of our policy: + +A. Privilege of Participation + Engagement in our online community is a privilege, not a right. This + distinction underscores the need for every member to contribute positively + and constructively. Our community is not an inalienable public square but + a collective space built on shared norms and respect. It invites + individuals who uphold these values, enrich discussions, and respect + fellow participants. Those unable or unwilling to abide by these standards + may forfeit the privilege of participating in our community. + +B. Right to Consent + Every member of our community possesses the fundamental right to consent + to how they're treated. This principle asserts that one's online presence + doesn't imply an open invitation for harassment, abuse, or unwanted + attention. It repudiates the notion that by merely being online, + individuals are 'asking for' or 'exposing themselves to' potential harm. + We respect and uphold the rights of our members to engage in a manner that + they feel comfortable with, putting their safety and well-being over + superficial metrics of engagement. + +C. Time Vampirism + Time vampirism refers to behaviors that drain the collective energy and + time of our community. These behaviors, such as trolling, engaging in + baseless arguments, and continually shifting the goalposts, detract from + meaningful engagement and waste the valuable time of our members. We + consider time a precious commodity and take a strong stance against + practices that misuse or squander it. + +D. Collaborative Culture + We foster a culture of collaboration rather than competition. Many online + platforms promote competition, driving engagement through conflict and + outrage. This often results in divisive, harmful spaces that discourage + genuine discourse. In contrast, our community encourages collective + growth, mutual learning, and supportive interactions. We believe that a + collaborative environment, where members uplift and learn from each other, + leads to richer, more positive experiences. + +E. Constructive Participation + Constructive participation is the cornerstone of our community. We expect + all members to contribute in a manner that is thoughtful, respectful, and + aligned with the community's purpose. This involves listening to others, + providing insightful input, staying on-topic, and treating all + participants with kindness. Constructive participation recognizes and + appreciates the diversity of our community, fostering an environment where + different perspectives can be shared and valued without fear of hostility + or derision. + +F. Accountability + Accountability is central to the functioning of a healthy, respectful, and + collaborative online community. Existing algorithms and policies on many + online platforms often fall short in this regard, allowing harmful + behaviors to persist under the guise of 'free speech' or engagement. 
This + approach not only undermines the quality of interactions but can also have + severe mental health repercussions for users exposed to such behaviors. + Our policy asserts that accountability should be paramount in any online + community. Members should take responsibility for their actions, comments, + and the effect they have on others. + +G. Consequences + Just as in real life, there must be consequences for actions that harm or + disrupt the community. All too often, online platforms allow harmful + behaviors to go unchecked, breeding a toxic environment that discourages + meaningful engagement and collaboration. We stand firm against this trend. + Actions that violate the standards and guidelines of our community, such + as bullying, harassment, or any form of harm, must be met with clear, + swift, and appropriate consequences. These may range from warnings and + temporary suspensions to permanent bans, depending on the severity and + frequency of the violation. By enforcing consequences, we aim to create an + online environment that mirrors the respect, responsibility, and + accountability expected in our everyday offline interactions. + +The C3P0 is a bold step towards fostering a healthier, more supportive, +and collaborative internet culture. By implementing this policy, we aspire +to redirect the internet away from harmful practices and towards a +community that values respect, collaboration, and constructive +participation above all. + + + +VI. The Golden Rule: Don't Waste Time + +Time is the most precious commodity we possess, and within the context of +our policy, it becomes the 'new algorithm,' the key metric guiding our +community interactions. Our golden rule enshrines this concept: Don't +waste anyone's time. + +This principle acknowledges that every second spent within our community +is valuable 'signal.' It reflects our commitment to maximize the +'signal-to-noise' ratio in our community interactions. By 'signal,' we +mean meaningful, valuable contributions that enrich discussions and foster +a sense of community. 'Noise,' on the other hand, encompasses behaviors +and content that waste time, distract from meaningful engagement, or +disrupt the constructive community environment. + +This golden rule is inherently subjective and contextual, reflecting the +varied nature of discussions and dynamics across diverse internet +communities. What might be considered 'noise' or time-wasting in one +context might be acceptable in another. For instance, a light-hearted meme +may be a distraction in a focused technical discussion, but in a casual +conversation, it might foster a sense of community and fun. + +Our golden rule encourages every member to ponder before posting: Does +this contribution act as a valuable signal? Does it respect the time of +others and align with the community's purpose? This perspective flips the +conventional understanding of engagement on its head, with time and +'signal' quality becoming the critical factors over mere quantity. + +We trust our community members to understand and exercise this principle +effectively. We are not seeking to micromanage, but rather to foster a +culture where time, as the new algorithm, is respected and valued. By +promoting a high signal-to-noise ratio, we aim to nurture a collaborative +environment marked by meaningful engagement, mutual respect, and valuable +contributions. + + + +VII. 
Time-Wasting Behaviors: Reducing Noise + +The C3P0 defines certain behaviors as 'noise' within our community, which +tend to detract from the quality of interactions and waste the valuable +time of our members. Here, we offer a non-exhaustive list of behaviors +that are generally considered 'time-wasting'. Understanding these +behaviors can help our community members better adhere to our golden rule: +Don't waste anyone's time. + +A. Trolling + Trolling refers to intentionally disruptive actions aimed at provoking + negative reactions, derailing discussions, or sowing discord within the + community. This behavior serves no constructive purpose and wastes the + community's time by redirecting attention away from valuable discourse. + +B. Baseless Arguing + Engaging in arguments without any substantial basis or evidence, often for + the sake of arguing, is another form of time-wasting behavior. This not + only detracts from meaningful discussions but also creates a hostile + environment that discourages constructive participation. + +C. Shifting the Goalposts + This behavior involves continually changing the criteria or standards in a + discussion once they have been met. It results in endless debates that + waste time and stifle the productive exchange of ideas. It also includes + 'whataboutism' and other red herrings. + +D. Armchair Debating + Participating in debates about complex subjects without appropriate + knowledge, understanding, or consideration of expert opinions is often + unproductive and may mislead other community members, thus wasting their + time. This also includes 'oneupmanship' and sophistry. + +E. Disingenuous Behaviors + Any form of dishonesty, such as misrepresentation of facts, misleading + other members, or feigning ignorance to provoke a reaction, is considered + time-wasting. Authenticity and honesty are essential in creating a + community built on trust and mutual respect. + +F. Harassing Behaviors + Any actions that involve persistent unwanted attention, bullying, or + infringement on another member's right to consent are strictly considered + time-wasting and disrespectful. Our community places a high value on the + emotional well-being of all members, and harassment of any form will not + be tolerated. + +By clearly identifying these behaviors, we aim to promote self-awareness +among our members. We expect everyone in our community to refrain from +these time-wasting behaviors, to contribute positively to the +signal-to-noise ratio, and to respect the golden rule. We hope this +contributes to a collaborative, respectful, and engaging environment where +each interaction is a good use of everyone's time. + + + +VIII. Good Uses of Time: Amplifying Signal + +Following our Golden Rule, we want to emphasize behaviors that contribute +positively to our community's signal-to-noise ratio. These behaviors, or +'good uses of time,' are actively encouraged as they align with our vision +of cultivating a respectful, collaborative, and engaging online +environment. Again, this list isn't exhaustive but serves as an +illustration of potentially beneficial behaviors: + +A. Thoughtful Participation + Taking the time to craft well-thought-out responses, comments, or posts + that contribute to the topic at hand is highly valued. Thoughtful + participation fosters meaningful discussions and is a respectful use of + everyone's time. + +B. Active Listening + Active listening involves engaging with others' ideas, showing + understanding, and responding constructively. 
This behavior demonstrates + respect for others' time and effort in sharing their thoughts and fosters + an environment of mutual learning. + +C. Respectful Disagreement + Disagreements are inevitable in any community, but it's important to + handle them respectfully. Expressing disagreement in a thoughtful, + respectful manner that focuses on the idea rather than the person is a + productive use of time and enriches discussions. + +D. Asking Insightful Questions + Asking insightful questions can stimulate discussion, encourage deeper + thought, and promote mutual learning. These questions are often open-ended + and invite others to share their perspectives, experiences, or expertise. + +E. Sharing Knowledge + Sharing relevant information, expertise, or experiences that contribute to + a discussion is highly encouraged. It adds value to conversations and is a + good use of everyone's time. + +F. Constructive Feedback + Providing constructive feedback helps others improve, fosters mutual + growth, and strengthens the community. Remember to focus on the behavior + or the idea, not the person, and to communicate your feedback in a + respectful, supportive manner. + +By promoting these behaviors, we aim to cultivate a community environment +that values quality over quantity, respects each other's time, and fosters +meaningful, engaging interactions. We encourage all members to practice +these behaviors and contribute positively to the community's +signal-to-noise ratio. In this way, every interaction within our community +becomes a valuable signal, and a respectful use of everyone's time. + + + +IX. Consequences: Upholding Accountability + +Consequences for time-wasting or harmful behavior serve to uphold +accountability and maintain the respect, safety, and integrity of our +community. It is important to note that the capacity to enforce certain +consequences will depend on the specific capabilities of various digital +platforms. While we acknowledge this variability, the following is a +general guideline for understanding the potential consequences of +violating our policy. Again, this list is not exhaustive but offers a +range of possible actions: + +A. Warning + Initial minor offenses might result in a warning. This serves as an + opportunity for the offender to acknowledge their misstep and correct + their behavior. + +B. Temporary Suspension or Timeout + Repeated offenses or more severe misbehavior may result in temporary + suspension or a timeout. This punitive measure offers the offender a + period of reflection and the chance to reconsider their actions. + +C. Permanent Ban + In cases of extremely disruptive behavior or in the event of serious, + repeat offenses, a permanent ban may be enforced. This ensures the safety + and well-being of the rest of the community. + +D. Removal of Content + Certain offenses may necessitate the removal of the offender's content. + This can range from a single inappropriate comment to an entire thread, + depending on the severity of the violation. + +While the application of these consequences may vary from platform to +platform, the core principle remains the same: enforcing accountability +for harmful behaviors. We recognize that not all platforms allow for the +full range of these consequences, often providing only more extreme +measures like blocking or banning. In such cases, we encourage communities +to be judicious but firm in their enforcement. 
+ +It's crucial to emphasize that the goal of these consequences isn't to +punish but to uphold the integrity, safety, and respect of our online +community. The same principle applies in various offline scenarios: flight +attendants wouldn't hesitate to enforce smoking prohibitions on a plane, +or if someone lit up a cigarette in a restaurant, they would be asked to +leave. These actions aren't seen as punishments, but as necessary measures +to ensure the safety and comfort of everyone else present. + +Likewise, the C3P0 policy sees the enforcement of consequences as a +commitment to creating a safe, respectful, and collaborative online +community. One person's noxious behavior can make the digital environment +unsafe and unpleasant for everyone else. By implementing consequences, +we're not punishing individuals but safeguarding the collective wellbeing +of our community. In this way, we are making the internet a better place +for everyone. + + + +X. Guidelines for Moderators: How to Use C3P0 Fairly + +The role of a moderator in implementing the C3P0 policy is crucial. It is +a challenging role that requires sensitivity, discernment, and a deep +understanding of our shared values and principles. The following +guidelines are designed to assist moderators in their role, ensuring that +they uphold the spirit of our policy while also encouraging lively, +constructive participation. + +A. Balance + The goal of our policy is not to suppress discussion or create + an environment of fear. While we adopt a Zero Tolerance stance towards + harmful behaviors, it is equally important to balance this with an + encouraging, inclusive atmosphere for genuine, well-meaning participants. + It's critical to foster an environment where users feel free to express + themselves, debate, and share ideas without fear of undue reprisal. The + aim should be to strike a balance where all users feel safe and heard, but + not silenced. + +B. Transparency + Being transparent about the rules, decisions, and actions + taken is crucial for fostering trust within the community. When enforcing + consequences, explain the reason clearly, citing the specific violation of + the policy. This clarity will not only help the individual understand + their misstep but also serve as a learning opportunity for the rest of the + community. Openly communicate any changes or updates to the community + guidelines, and provide reasons behind these modifications. Additionally, + consider creating a publicly accessible log of moderation actions (while + maintaining user privacy), which can demonstrate your commitment to + fairness and accountability. + +C. Consistency and Fairness + Treat all members of the community with equal + respect and fairness, regardless of their status or popularity. Ensure + that the policy is applied consistently to everyone. Avoid showing + favoritism or bias, as this can damage the trust and harmony within the + community. For instance, a new user violating the guidelines should + receive the same treatment as a seasoned member. In cases of rule + violation, communicate clearly about the infringement, the relevant + section of the policy it contravenes, and the subsequent action taken. By + doing so, you demonstrate transparency and uphold the principle of + fairness. + +D. Proactive Engagement + Anticipate potential issues and respond to them + before they escalate. This could involve addressing emerging conflicts, + clarifying misunderstandings, or reiterating community guidelines as + necessary. 
Being proactive also means guiding discussions constructively + to prevent them from spiraling into negativity or toxicity. For instance, + if you observe a conversation heating up, consider stepping in with a + reminder about respectful dialogue or steering the conversation back on + track. This proactive approach can maintain a positive environment and + prevent the need for punitive measures. + +E. Diplomacy and Empathy + The essence of moderation is not in the exercise + of power, but in diplomacy and empathy. When enforcing guidelines, + approach the situation with understanding and tact. Aim to guide rather + than chastise, keeping in mind that your goal is to foster a respectful, + constructive community. Before taking action, consider the context, the + user's history, and the potential for misunderstanding. If possible, + privately communicate with the user in question to address the issue, + explaining the violation and the necessity for the guideline. This + diplomatic approach can help resolve issues without resorting to public + penalties, which should be used as a last resort. + +Always remember that there is a person behind each username, with their +own experiences, perspectives, and feelings. Strive to foster a supportive +and understanding atmosphere where everyone feels respected and heard. +While firmness is necessary to maintain order and respect, it should +always be balanced with empathy and respect for individual dignity. + +The goal of the C3P0 is not just to penalize harmful behavior but to +actively encourage a positive, respectful, and collaborative culture. As a +moderator, your role is pivotal in shaping the tone and culture of the +community. Your fair and balanced approach to implementing this policy +will be key in creating an online space where everyone feels valued, +respected, and free to contribute. +``` From 5160fd3d350d53ed0752f88f6949765a2ef9eb2f Mon Sep 17 00:00:00 2001 From: Oliver Wiggins-Hay Date: Sun, 12 Nov 2023 13:18:06 +0000 Subject: [PATCH 050/141] Hotfix to make functions folder if it doesn't already exist Hotfix as title describes. 
Minimal functional change --- .../assistant_manager.cpython-39.pyc | Bin 4322 -> 4322 bytes .../__pycache__/chat_manager.cpython-39.pyc | Bin 5472 -> 5581 bytes .../__pycache__/tool_manager.cpython-39.pyc | Bin 1155 -> 1155 bytes tool_maker/chat_manager.py | 6 +++++- 4 files changed, 5 insertions(+), 1 deletion(-) diff --git a/tool_maker/__pycache__/assistant_manager.cpython-39.pyc b/tool_maker/__pycache__/assistant_manager.cpython-39.pyc index 5af6ff3cecec32396f1e7a3e3ba6160d6efdd33d..a4a2c9bfb8fed20823c1add803bdb849a4c02861 100644 GIT binary patch delta 19 ZcmaE)_(+i}k(ZZ?0SF$S-NA!tQ_WHLoec0a~>G??UWBHhi+?cPHw7iCqkQHCg>m*Ka9TkmY zVkfzaiYXU$qdLx;O?!GLvkR7%_mHc)+C$`ObtRwOe)Kc8*c?84J1b7B;Tp1$3y58G z6IsgqdT&%N`*z7$A#}X57A=Yt0UHPVG!>pM*UA5#c#%Mv%p^B#- z!zgH*5Mjf4=EwU7y0`G!V~jR37_DEZ;bPM5h{)#3 zZAf)rp$&_axiWS!pRQ1{>XoaZ4Wu0J8Z;m|?R#NVD3>ZTm)U~x_3&#DP{JjMfw_9L z4O&Jo3%J6x#Bf&%9AZ4tXq)Z1Gz|{s!EE9UtX~89b@o;A>ii2lz8)K^tZ{sAJ@Fu{ z778Q`H)tn#bOCe&^a!x}XiWU^Tv%8pCC6T*<&sTFSP_Fm8l|0OR;vHf71#lDBw6N3 zCk#a$P7sA$uU1^aDIJ1UNq+{UR`wt@xT%ywuEM-=o!l2YLaK{Ju9a3?k8-tKApuX` zqh3UOFCZa(1-1?YoD*QRP&s<%!7v6e4sa1*LIBqrc~eL)-<)vAw`n!Vw=Y)NKdHep zr@>QQ^n5}`VCMw@IcQTLCD>bLpKb{;yut38vsxCLxQFeT&o`l``VrZ+(y6~J|;BFJ*7BsUM{&jI8J-UjIl0ao%@L1XNXZ2P37Ch^}) zg;b{fQkjh^dEzcv@&aIP^-L9zj$Xe-{x_k=d)(_SE^deJEm$mwNf4pTO0H6Ga2z>9uhzqfB> z;d4N90386M?91G#P1!=AuyO1ikOaBML9z_jE|eUe5n$kqYG_8x$SGJpFU%RE#h5(Q z{-)qm@iT#x`Rt8{2zE<**eh1Ve4qZ@>E^Y!U#6Kxb aZcLZTZq+9ijgf7;VebyGyRl(1%YKd>9v{3(u6;_R?Ie-5 zW*dapEE8rL$rj<7owr6iq>i$(b$EUm zhfrLhOJur1!1Hx0VG!x{n#WauNgNe|^`Vt!-&+%}tJNB__`mGRW-2!zPk%6A(4vMUk8Y-#c`}deG(7N&hw6#Q++DPd6X&LQ0W}8bvB+ z>xvR{P3XsAq;0yTI^}&Un4IP_;bjDo;o}yxJK4bsA1UsAmi(OP7jV*>Y&^9v{(tlx z1>VA0IkuT{X4cRYxo6=b+WUfZ6Nk5PV3OTWEjCBd-0wqp1Kl|ek?z*GEFK7<-ofy@ z2%>E+q4gdI$Lt|=fHZ(Tk&gonYSl`>l@%mue(Jm zGV944my5&=o^qjH@x!m!(%^OzA2i4yWD!Iy<7XaUg-E|Bo}f;lQRq%0m(hy<7sM4- zc?^x<2nT~C6+<%=V^AjI`^(N5ea6Bg?{840CMT4Nbi7io5BRVGu8eGa8VY5%;$EQu z3dSc8#4Qu==vj2TLVtj6_ic62#bgf3GBR@Is#h=3Qv?MKP&%6!g#YEINLu+9v35*b diff --git a/tool_maker/__pycache__/tool_manager.cpython-39.pyc b/tool_maker/__pycache__/tool_manager.cpython-39.pyc index 60637ec851e5ad2445e3365e8ca04bc5386d476f..29f71ea6ece1e5089c0cbc662cb3a80ed8a38132 100644 GIT binary patch delta 19 ZcmZqXZ06)j Date: Sun, 12 Nov 2023 13:18:06 +0000 Subject: [PATCH 051/141] Hotfix to make functions folder if it doesn't already exist Hotfix as title describes. 
Minimal functional change --- .../assistant_manager.cpython-39.pyc | Bin 4322 -> 4322 bytes .../__pycache__/chat_manager.cpython-39.pyc | Bin 5472 -> 5581 bytes .../__pycache__/tool_manager.cpython-39.pyc | Bin 1155 -> 1155 bytes tool_maker/chat_manager.py | 6 +++++- 4 files changed, 5 insertions(+), 1 deletion(-) diff --git a/tool_maker/__pycache__/assistant_manager.cpython-39.pyc b/tool_maker/__pycache__/assistant_manager.cpython-39.pyc index 5af6ff3cecec32396f1e7a3e3ba6160d6efdd33d..a4a2c9bfb8fed20823c1add803bdb849a4c02861 100644 GIT binary patch delta 19 ZcmaE)_(+i}k(ZZ?0SF$S-NA!tQ_WHLoec0a~>G??UWBHhi+?cPHw7iCqkQHCg>m*Ka9TkmY zVkfzaiYXU$qdLx;O?!GLvkR7%_mHc)+C$`ObtRwOe)Kc8*c?84J1b7B;Tp1$3y58G z6IsgqdT&%N`*z7$A#}X57A=Yt0UHPVG!>pM*UA5#c#%Mv%p^B#- z!zgH*5Mjf4=EwU7y0`G!V~jR37_DEZ;bPM5h{)#3 zZAf)rp$&_axiWS!pRQ1{>XoaZ4Wu0J8Z;m|?R#NVD3>ZTm)U~x_3&#DP{JjMfw_9L z4O&Jo3%J6x#Bf&%9AZ4tXq)Z1Gz|{s!EE9UtX~89b@o;A>ii2lz8)K^tZ{sAJ@Fu{ z778Q`H)tn#bOCe&^a!x}XiWU^Tv%8pCC6T*<&sTFSP_Fm8l|0OR;vHf71#lDBw6N3 zCk#a$P7sA$uU1^aDIJ1UNq+{UR`wt@xT%ywuEM-=o!l2YLaK{Ju9a3?k8-tKApuX` zqh3UOFCZa(1-1?YoD*QRP&s<%!7v6e4sa1*LIBqrc~eL)-<)vAw`n!Vw=Y)NKdHep zr@>QQ^n5}`VCMw@IcQTLCD>bLpKb{;yut38vsxCLxQFeT&o`l``VrZ+(y6~J|;BFJ*7BsUM{&jI8J-UjIl0ao%@L1XNXZ2P37Ch^}) zg;b{fQkjh^dEzcv@&aIP^-L9zj$Xe-{x_k=d)(_SE^deJEm$mwNf4pTO0H6Ga2z>9uhzqfB> z;d4N90386M?91G#P1!=AuyO1ikOaBML9z_jE|eUe5n$kqYG_8x$SGJpFU%RE#h5(Q z{-)qm@iT#x`Rt8{2zE<**eh1Ve4qZ@>E^Y!U#6Kxb aZcLZTZq+9ijgf7;VebyGyRl(1%YKd>9v{3(u6;_R?Ie-5 zW*dapEE8rL$rj<7owr6iq>i$(b$EUm zhfrLhOJur1!1Hx0VG!x{n#WauNgNe|^`Vt!-&+%}tJNB__`mGRW-2!zPk%6A(4vMUk8Y-#c`}deG(7N&hw6#Q++DPd6X&LQ0W}8bvB+ z>xvR{P3XsAq;0yTI^}&Un4IP_;bjDo;o}yxJK4bsA1UsAmi(OP7jV*>Y&^9v{(tlx z1>VA0IkuT{X4cRYxo6=b+WUfZ6Nk5PV3OTWEjCBd-0wqp1Kl|ek?z*GEFK7<-ofy@ z2%>E+q4gdI$Lt|=fHZ(Tk&gonYSl`>l@%mue(Jm zGV944my5&=o^qjH@x!m!(%^OzA2i4yWD!Iy<7XaUg-E|Bo}f;lQRq%0m(hy<7sM4- zc?^x<2nT~C6+<%=V^AjI`^(N5ea6Bg?{840CMT4Nbi7io5BRVGu8eGa8VY5%;$EQu z3dSc8#4Qu==vj2TLVtj6_ic62#bgf3GBR@Is#h=3Qv?MKP&%6!g#YEINLu+9v35*b diff --git a/tool_maker/__pycache__/tool_manager.cpython-39.pyc b/tool_maker/__pycache__/tool_manager.cpython-39.pyc index 60637ec851e5ad2445e3365e8ca04bc5386d476f..29f71ea6ece1e5089c0cbc662cb3a80ed8a38132 100644 GIT binary patch delta 19 ZcmZqXZ06)j Date: Sun, 12 Nov 2023 13:23:18 +0000 Subject: [PATCH 052/141] Removed PYC files removed as gitignore not currently formatted properly --- .../__pycache__/assistant_manager.cpython-39.pyc | Bin 4322 -> 0 bytes .../__pycache__/chat_manager.cpython-39.pyc | Bin 5581 -> 0 bytes .../__pycache__/tool_manager.cpython-39.pyc | Bin 1155 -> 0 bytes 3 files changed, 0 insertions(+), 0 deletions(-) delete mode 100644 tool_maker/__pycache__/assistant_manager.cpython-39.pyc delete mode 100644 tool_maker/__pycache__/chat_manager.cpython-39.pyc delete mode 100644 tool_maker/__pycache__/tool_manager.cpython-39.pyc diff --git a/tool_maker/__pycache__/assistant_manager.cpython-39.pyc b/tool_maker/__pycache__/assistant_manager.cpython-39.pyc deleted file mode 100644 index a4a2c9bfb8fed20823c1add803bdb849a4c02861..0000000000000000000000000000000000000000 GIT binary patch literal 0 HcmV?d00001 literal 4322 zcmb_f-EQ2*73Pp!?rOD?e;OB6>ZVhrPP?h4*hSDE!*F9XX&?uY-6}xqtpZ}inU%QW zlJt-(WmV})xiyNSFCbm|A^He?0p9j1uaKg3zcb`gyKA9v&~m}yaA)Ss`T5S7k78-5 zZQ=UI-=C=8Ud5AMrY{FCAK=bXG}4kRvWBe381?PQ?%5w((vj|$mUK_;o}*mZ*td0r zQ7>u?y`ERiZ1$R}dDfE6bJkl>o?4Kt=d9OOj%v#V^cL0PS?Ap9EvaR-bV~B=Evwu9 z3cFfeRxEy&B+;iL7Q0Gs2K49jW#i=o+}SVCD67Y$)w3n*IntKSmsZb}Pz%q7^kfrH zPqyR&o=w@7i+HwV2b(PwS083snB^kQtF6S}Ao@>jj%zg*L$!L3ul7fAkcUa^YxQiT 
zvV8S+IWAQe=rE-rnpTZ+5pyxh_YxiEW1c5`S3^7>CnGL`KxG;ICwQz5`9$zUU@A1W z%$3d{uT;V)l8;lPNU}3f!DM?glIkQ?IuF&{ic0kh88dRST!-;)wJVL#wK``GiQq|} z=X>fxQDZpGJ}G^0$-@6-PxVC#1C^{d|NJ?V;^Ljd}O@AL|Jv5T_hoSzS-~-1JV$HlW;?T$Ic$+`q=1{RYi3JGOkJ zGFp}+W2Aj*|M}*LojYglv3*XXjo)M3#qZQ%R($m?lI)Jf%_!wvu8E_U*_4uu|W zU#Pf!A$(XCmGyL7w0u8|!`$~dWM z&0Ik>qe6r?Hn0b9?z#1$b;=HyzI|++HO{RQ7QbH>srfl4_T-;|cT9aF4-&oS?1Y&B zHjf#C(ER$r?-Wv59UxIPH~v?q{7k%D!Wbp1L~96KSG5dq%*TaGNj_r_fro6`!kt}5 zvkMzOZ_~a5>o4{*cdHD*3#PAO^o3p6>vyz=u8B+8EaSCk-;YpPgJhUK_%YrylI*dg zpT6SPU6uQFbk^@jNg$%^0XmcAuP8_kHwu0mosbkVR?6x$WvlVoAkP}6_9?uGwtSfF(`uG>wLx)T~ zsc7jcy2ig(3UgLdc&WS@9H+j-Hc8)1` z26Mtg^;$lk2|N|9CT>%VAEA?5 zXE^1~oZKBW&e=KJC8$~hZ_pgH1`Fu5sZ^V%T|J%Z>Eg*`<9UOC2nc$cMPryq6%~t9 zM5*Y6F*wG^@Waehn{Gq@5L@e8)KKb`HK;*Zo^=;Bp|iJQa=x#V;RTY*Q(e<3R5a*B z%rqHP+^9In#Y|c}7cv~Nbo?q)bS9RI#C~me57NZ#1SYV_N``3idZR9gU&!*30=96GNkVc5%FAo0oAke141jF zi)h~*=a^O6Mr$w$pBX9{WgH3-AX$4!B=LDdbR*5wNG4`d6(ay}zg(Y&>wKecR5nfuBB8^6 z7zm?8?J_JtlqyC-Hqnl^L7mNKmLV_3E)x;Mu4{gN7{br6(x9G}3S!(B(n!MtkQ`az zME?LFL|&NGtg{j&Y_9UXOVeyq*m}#P&P92jPoh;!{R8f-gT`WQX4CIqUgPMU%M%CU zX3mSv?gBAT;$XgCw0$4-WfW1r!LR9@mbwK78WPq1I%5vv!Weve-4Br80We-$6}JP3mSyA%5Vk% diff --git a/tool_maker/__pycache__/chat_manager.cpython-39.pyc b/tool_maker/__pycache__/chat_manager.cpython-39.pyc deleted file mode 100644 index 8815af3b10d0dae933191d247204aa0f5f60bb54..0000000000000000000000000000000000000000 GIT binary patch literal 0 HcmV?d00001 literal 5581 zcmcIo&668P6`!6jjb{$(K zTxU1zpcJqjt>bjcyY8OWnc~`#w!<2g{qpK=ZI4Q(-_p3rty>zmcJ#&!*Dq^z`%BEI z)mdi0+Yh317gvy+x)R6Tcl^j-3q*^mCw~?)ui=p$h(K#Ft~K<);Os8rIyY{a4Ktsl zVR4(6ZfOmhJG_i{DR6=^J>0!(@+n?HTbEaP4Rup|n$O@}In-vB&!J6~&qL9LZ0h{F zpN!Qag0-wT54gZoR#6s z`Ezf-eR1V2@6GpC&R@Cs?#giXQJYvEmP8;2-Q@H=y-`tkZu#AAGvt{a4-y>gefHsf zcKSXW)vKA+7h#lSPGLZq8TmaRZ4GEj5|Nd|UOyH|H(brCd9vAy`2b)vJ2H+ks~Z#C z%Dz||^Z-qn>6obe#?4mHClk%gWV?^XsF(GULSbf-K8J`!H|7 zHxMEgkD}lyF^HPQx(T$0hB{7oI0>OKhw196G5vvQ9%qZJrkiZ>!AbKln!k3ux|z*m zj{?z#&uIEmhBEQ7XEO(dxdCMhg&ux)qTTF7Y2L7FN=GHwn%yZ; zutAx-rlqAnu#dG*8Mki(-I4cLJ;s}CF|MELR65nv>bObmW6Z`&+_`NqvSb>4sxzI} zre??Gh!?kwk%PLK)JZKqg_+7}d3Tmq_86}&X-L(OnqJaI?!kISQ^{Oses=-=j_t8K zOfDpIyNh{_X~`3mf}PByQ;@S?n0#(W|J{jEC9UipPu)FO?fbAB6aO7(7R|kgeUvMw zG^YMnA+et>pXUp(%PQ>hP1xnq9k%@!SY~Z(nJzz;oJecRG8fZY=Sj@<1T1shM_Fd= zqnffzyp?6@X|+wE`S^+;(21uY4mm(_O?0@I@yG=T9E0{jWkd&zFz~5H6#sk?TOMcW4+%sK(u@(3FXMaeeQ?xuK_tC) zEWB1M6Hoe^=(g;g>&h6T`q&o?1|C2oi$h&|1GHZaJRXXmm2|f~_>jn7?FODNJ>TnZ zd+iVzjI&BX+}jG1bqs|LLEGC1wzp!zp&a*k>_u_n^+miHa`cY2bKT+rT(!5g9=6uK zP^vaKX!5ZER}0&WL+-6Y%Y3OAT5iiE=q-C!;B5m8so0YDvYKWqjCi~y?-}B!w1q#0 z$SPqZ6EP^bRAyienVoO;uzdac^?ZaciNFjEd#R9l8RD_eiXj4nRgKclmm8h2jpGfpvj24I-Jj5^)Rgr?pwK-wl&Yk7Z^OQOxY%CZI0`IpoahZQwLB zGX}NJ+$|9%K{t#7DZWkPK1<>|5cTPtOg%>h%OqYPaf-x?5Scxpdt;xPA|^nE#(c3t z%8$d`L8E3qA^DKZBmjx;(Xh{xP*ms@N-6U8#D6jCI$FqIKxmGs*FfE7bn}5@x{$Wz zGLyLv9QCd|uH!+qNJPDs6Cy9VW zG>pjd@XX~wzYhacg(8WL3z(?^A+f`iRg7(3sk(1ykQiqO$_%8A0JjX5c z`3U1y5&!iRd>AaizHMCt;~14AH`R7)DL6A&L1}k7EmQ2i@%oUsNTG?N00B+X5-)*G zFvRQSJd!6a=|!_qYW#M_?-ISQYLO1cOh9usbonopDnlpq;4Zbm1(P+k37E}vW>Z@C z6ks;TD*)KMk_(+<&{9v!D#BM+#1(9#cn_j6O;HrSUp9T^KZj*vwTK?4y~a#o<3*^& zCG9!td4#v98{(H#MOYit^*ln!hTZnz0XS~rH~W~0H_Qtc-?$*&r{)Tki{e+5rWieO zc_K)Pt0c%X?kmd2iZn?s>Dox2Q-~BNr5z%hMH{DMQX7B})SO!YvxT~Hijv$o#-;(9zu(8D9soWY=@Vt} zld8;2N_2VLpNYmX+vE#zLA2?*b=p(RNt}kV!@vE%NYd3Y*#d3TVn6*rB->ux_KMbp@hpooSO-i8tT_2I#*)N)wNf5c?R^Z6 zIN4E%0hBj|o1$j1x1TnPFc zlACM!ToDE{`-=K5l-n}J6`|IDol?$WLF7VE%qH?JJqVbN=~#~8EGQFW`u)mTXWIF} zIQNxmU7yA01$YNP%oW0SQ%(FFBFn!)j6b9)LMXmTm5NZQZ$58O_Q0QT${u_N3Mfko b9i?gJoXNjOoF$Ja>CE9D@Q2iovpM5GNc(&X diff --git 
a/tool_maker/__pycache__/tool_manager.cpython-39.pyc b/tool_maker/__pycache__/tool_manager.cpython-39.pyc deleted file mode 100644 index 29f71ea6ece1e5089c0cbc662cb3a80ed8a38132..0000000000000000000000000000000000000000 GIT binary patch literal 0 HcmV?d00001 literal 1155 zcmZ`&&2AGh5VpNevfZ}gKo1lyRs=|qP!ZP%0acL@NKKK_QxwT^*G`*~{i(gHs!MWe z-vU*lM;?Y3*efSq0inXorb!nBUU@t}p7H1T)@pq{ATYjteQyptSkb5t@R3(A-F*<6 zXgVPS@`h-py$hnfb2gxw_DP3-hdAklDC=XH{}8-}z-}^N#?w@@3vx~e9CT0fN^j@} zRBD#1`$;m{Rk1oWHl(#`40H~22c{DsG|k9oIwe>16F|E3W@xi1y`nQZMOa?3lc^8J z2_@e3_jO+0x(fN`CB;xNPV=N?qBQBucvuE1PpT1CGaiA+`lq z`tfqq9rlvUxZ&=Z=%*?)!#O;;eQ4rL_D_|KhL|BExSCz4RlBy*vvRYFjcgLh1t2*; zsDXW}`1Tm{uYDfS;?e$Xv$y`6`3DVH2BB6oyar`K0ybUKO)>)tG=tTH)q|C5K4m~r zW9oryPC3ZdWoyQ(On!w_`Nh-yIaNRj-c(rQ(j<1KqHC1c=r0;e9;?WdtJ=8Gj#FgA zHUUm?w?-FkX?I1tcu;4mmQn>D*Mq6DkdztgAhu)?`7eT{IqNWsKexqhbQ%MHfp)O5 z?7B54xpA3H4vsCh-L9fQh!xdy;e%Oj%n6beYl}*EAmZ+UAYMTI>SL7s;*4E$&VKsM zLe$CoSBV}xCtpfzf|O+-Wt8ZAg8G`2C%Ky3MAlrUvT+EvK1#G*fg|l23Z%Iv5UX9h jLWh=584Xq#t-Ont;Eq~WRiwJ6|9pP_3w+fMxB1?0>b)M1 From d9e3fad680fa16902b969f6248e11231fae8d740 Mon Sep 17 00:00:00 2001 From: Oliver Wiggins-Hay Date: Sun, 12 Nov 2023 13:23:18 +0000 Subject: [PATCH 053/141] Removed PYC files removed as gitignore not currently formatted properly --- .../__pycache__/assistant_manager.cpython-39.pyc | Bin 4322 -> 0 bytes .../__pycache__/chat_manager.cpython-39.pyc | Bin 5581 -> 0 bytes .../__pycache__/tool_manager.cpython-39.pyc | Bin 1155 -> 0 bytes 3 files changed, 0 insertions(+), 0 deletions(-) delete mode 100644 tool_maker/__pycache__/assistant_manager.cpython-39.pyc delete mode 100644 tool_maker/__pycache__/chat_manager.cpython-39.pyc delete mode 100644 tool_maker/__pycache__/tool_manager.cpython-39.pyc diff --git a/tool_maker/__pycache__/assistant_manager.cpython-39.pyc b/tool_maker/__pycache__/assistant_manager.cpython-39.pyc deleted file mode 100644 index a4a2c9bfb8fed20823c1add803bdb849a4c02861..0000000000000000000000000000000000000000 GIT binary patch literal 0 HcmV?d00001 literal 4322 zcmb_f-EQ2*73Pp!?rOD?e;OB6>ZVhrPP?h4*hSDE!*F9XX&?uY-6}xqtpZ}inU%QW zlJt-(WmV})xiyNSFCbm|A^He?0p9j1uaKg3zcb`gyKA9v&~m}yaA)Ss`T5S7k78-5 zZQ=UI-=C=8Ud5AMrY{FCAK=bXG}4kRvWBe381?PQ?%5w((vj|$mUK_;o}*mZ*td0r zQ7>u?y`ERiZ1$R}dDfE6bJkl>o?4Kt=d9OOj%v#V^cL0PS?Ap9EvaR-bV~B=Evwu9 z3cFfeRxEy&B+;iL7Q0Gs2K49jW#i=o+}SVCD67Y$)w3n*IntKSmsZb}Pz%q7^kfrH zPqyR&o=w@7i+HwV2b(PwS083snB^kQtF6S}Ao@>jj%zg*L$!L3ul7fAkcUa^YxQiT zvV8S+IWAQe=rE-rnpTZ+5pyxh_YxiEW1c5`S3^7>CnGL`KxG;ICwQz5`9$zUU@A1W z%$3d{uT;V)l8;lPNU}3f!DM?glIkQ?IuF&{ic0kh88dRST!-;)wJVL#wK``GiQq|} z=X>fxQDZpGJ}G^0$-@6-PxVC#1C^{d|NJ?V;^Ljd}O@AL|Jv5T_hoSzS-~-1JV$HlW;?T$Ic$+`q=1{RYi3JGOkJ zGFp}+W2Aj*|M}*LojYglv3*XXjo)M3#qZQ%R($m?lI)Jf%_!wvu8E_U*_4uu|W zU#Pf!A$(XCmGyL7w0u8|!`$~dWM z&0Ik>qe6r?Hn0b9?z#1$b;=HyzI|++HO{RQ7QbH>srfl4_T-;|cT9aF4-&oS?1Y&B zHjf#C(ER$r?-Wv59UxIPH~v?q{7k%D!Wbp1L~96KSG5dq%*TaGNj_r_fro6`!kt}5 zvkMzOZ_~a5>o4{*cdHD*3#PAO^o3p6>vyz=u8B+8EaSCk-;YpPgJhUK_%YrylI*dg zpT6SPU6uQFbk^@jNg$%^0XmcAuP8_kHwu0mosbkVR?6x$WvlVoAkP}6_9?uGwtSfF(`uG>wLx)T~ zsc7jcy2ig(3UgLdc&WS@9H+j-Hc8)1` z26Mtg^;$lk2|N|9CT>%VAEA?5 zXE^1~oZKBW&e=KJC8$~hZ_pgH1`Fu5sZ^V%T|J%Z>Eg*`<9UOC2nc$cMPryq6%~t9 zM5*Y6F*wG^@Waehn{Gq@5L@e8)KKb`HK;*Zo^=;Bp|iJQa=x#V;RTY*Q(e<3R5a*B z%rqHP+^9In#Y|c}7cv~Nbo?q)bS9RI#C~me57NZ#1SYV_N``3idZR9gU&!*30=96GNkVc5%FAo0oAke141jF zi)h~*=a^O6Mr$w$pBX9{WgH3-AX$4!B=LDdbR*5wNG4`d6(ay}zg(Y&>wKecR5nfuBB8^6 z7zm?8?J_JtlqyC-Hqnl^L7mNKmLV_3E)x;Mu4{gN7{br6(x9G}3S!(B(n!MtkQ`az zME?LFL|&NGtg{j&Y_9UXOVeyq*m}#P&P92jPoh;!{R8f-gT`WQX4CIqUgPMU%M%CU 
zX3mSv?gBAT;$XgCw0$4-WfW1r!LR9@mbwK78WPq1I%5vv!Weve-4Br80We-$6}JP3mSyA%5Vk% diff --git a/tool_maker/__pycache__/chat_manager.cpython-39.pyc b/tool_maker/__pycache__/chat_manager.cpython-39.pyc deleted file mode 100644 index 8815af3b10d0dae933191d247204aa0f5f60bb54..0000000000000000000000000000000000000000 GIT binary patch literal 0 HcmV?d00001 literal 5581 zcmcIo&668P6`!6jjb{$(K zTxU1zpcJqjt>bjcyY8OWnc~`#w!<2g{qpK=ZI4Q(-_p3rty>zmcJ#&!*Dq^z`%BEI z)mdi0+Yh317gvy+x)R6Tcl^j-3q*^mCw~?)ui=p$h(K#Ft~K<);Os8rIyY{a4Ktsl zVR4(6ZfOmhJG_i{DR6=^J>0!(@+n?HTbEaP4Rup|n$O@}In-vB&!J6~&qL9LZ0h{F zpN!Qag0-wT54gZoR#6s z`Ezf-eR1V2@6GpC&R@Cs?#giXQJYvEmP8;2-Q@H=y-`tkZu#AAGvt{a4-y>gefHsf zcKSXW)vKA+7h#lSPGLZq8TmaRZ4GEj5|Nd|UOyH|H(brCd9vAy`2b)vJ2H+ks~Z#C z%Dz||^Z-qn>6obe#?4mHClk%gWV?^XsF(GULSbf-K8J`!H|7 zHxMEgkD}lyF^HPQx(T$0hB{7oI0>OKhw196G5vvQ9%qZJrkiZ>!AbKln!k3ux|z*m zj{?z#&uIEmhBEQ7XEO(dxdCMhg&ux)qTTF7Y2L7FN=GHwn%yZ; zutAx-rlqAnu#dG*8Mki(-I4cLJ;s}CF|MELR65nv>bObmW6Z`&+_`NqvSb>4sxzI} zre??Gh!?kwk%PLK)JZKqg_+7}d3Tmq_86}&X-L(OnqJaI?!kISQ^{Oses=-=j_t8K zOfDpIyNh{_X~`3mf}PByQ;@S?n0#(W|J{jEC9UipPu)FO?fbAB6aO7(7R|kgeUvMw zG^YMnA+et>pXUp(%PQ>hP1xnq9k%@!SY~Z(nJzz;oJecRG8fZY=Sj@<1T1shM_Fd= zqnffzyp?6@X|+wE`S^+;(21uY4mm(_O?0@I@yG=T9E0{jWkd&zFz~5H6#sk?TOMcW4+%sK(u@(3FXMaeeQ?xuK_tC) zEWB1M6Hoe^=(g;g>&h6T`q&o?1|C2oi$h&|1GHZaJRXXmm2|f~_>jn7?FODNJ>TnZ zd+iVzjI&BX+}jG1bqs|LLEGC1wzp!zp&a*k>_u_n^+miHa`cY2bKT+rT(!5g9=6uK zP^vaKX!5ZER}0&WL+-6Y%Y3OAT5iiE=q-C!;B5m8so0YDvYKWqjCi~y?-}B!w1q#0 z$SPqZ6EP^bRAyienVoO;uzdac^?ZaciNFjEd#R9l8RD_eiXj4nRgKclmm8h2jpGfpvj24I-Jj5^)Rgr?pwK-wl&Yk7Z^OQOxY%CZI0`IpoahZQwLB zGX}NJ+$|9%K{t#7DZWkPK1<>|5cTPtOg%>h%OqYPaf-x?5Scxpdt;xPA|^nE#(c3t z%8$d`L8E3qA^DKZBmjx;(Xh{xP*ms@N-6U8#D6jCI$FqIKxmGs*FfE7bn}5@x{$Wz zGLyLv9QCd|uH!+qNJPDs6Cy9VW zG>pjd@XX~wzYhacg(8WL3z(?^A+f`iRg7(3sk(1ykQiqO$_%8A0JjX5c z`3U1y5&!iRd>AaizHMCt;~14AH`R7)DL6A&L1}k7EmQ2i@%oUsNTG?N00B+X5-)*G zFvRQSJd!6a=|!_qYW#M_?-ISQYLO1cOh9usbonopDnlpq;4Zbm1(P+k37E}vW>Z@C z6ks;TD*)KMk_(+<&{9v!D#BM+#1(9#cn_j6O;HrSUp9T^KZj*vwTK?4y~a#o<3*^& zCG9!td4#v98{(H#MOYit^*ln!hTZnz0XS~rH~W~0H_Qtc-?$*&r{)Tki{e+5rWieO zc_K)Pt0c%X?kmd2iZn?s>Dox2Q-~BNr5z%hMH{DMQX7B})SO!YvxT~Hijv$o#-;(9zu(8D9soWY=@Vt} zld8;2N_2VLpNYmX+vE#zLA2?*b=p(RNt}kV!@vE%NYd3Y*#d3TVn6*rB->ux_KMbp@hpooSO-i8tT_2I#*)N)wNf5c?R^Z6 zIN4E%0hBj|o1$j1x1TnPFc zlACM!ToDE{`-=K5l-n}J6`|IDol?$WLF7VE%qH?JJqVbN=~#~8EGQFW`u)mTXWIF} zIQNxmU7yA01$YNP%oW0SQ%(FFBFn!)j6b9)LMXmTm5NZQZ$58O_Q0QT${u_N3Mfko b9i?gJoXNjOoF$Ja>CE9D@Q2iovpM5GNc(&X diff --git a/tool_maker/__pycache__/tool_manager.cpython-39.pyc b/tool_maker/__pycache__/tool_manager.cpython-39.pyc deleted file mode 100644 index 29f71ea6ece1e5089c0cbc662cb3a80ed8a38132..0000000000000000000000000000000000000000 GIT binary patch literal 0 HcmV?d00001 literal 1155 zcmZ`&&2AGh5VpNevfZ}gKo1lyRs=|qP!ZP%0acL@NKKK_QxwT^*G`*~{i(gHs!MWe z-vU*lM;?Y3*efSq0inXorb!nBUU@t}p7H1T)@pq{ATYjteQyptSkb5t@R3(A-F*<6 zXgVPS@`h-py$hnfb2gxw_DP3-hdAklDC=XH{}8-}z-}^N#?w@@3vx~e9CT0fN^j@} zRBD#1`$;m{Rk1oWHl(#`40H~22c{DsG|k9oIwe>16F|E3W@xi1y`nQZMOa?3lc^8J z2_@e3_jO+0x(fN`CB;xNPV=N?qBQBucvuE1PpT1CGaiA+`lq z`tfqq9rlvUxZ&=Z=%*?)!#O;;eQ4rL_D_|KhL|BExSCz4RlBy*vvRYFjcgLh1t2*; zsDXW}`1Tm{uYDfS;?e$Xv$y`6`3DVH2BB6oyar`K0ybUKO)>)tG=tTH)q|C5K4m~r zW9oryPC3ZdWoyQ(On!w_`Nh-yIaNRj-c(rQ(j<1KqHC1c=r0;e9;?WdtJ=8Gj#FgA zHUUm?w?-FkX?I1tcu;4mmQn>D*Mq6DkdztgAhu)?`7eT{IqNWsKexqhbQ%MHfp)O5 z?7B54xpA3H4vsCh-L9fQh!xdy;e%Oj%n6beYl}*EAmZ+UAYMTI>SL7s;*4E$&VKsM zLe$CoSBV}xCtpfzf|O+-Wt8ZAg8G`2C%Ky3MAlrUvT+EvK1#G*fg|l23Z%Iv5UX9h jLWh=584Xq#t-Ont;Eq~WRiwJ6|9pP_3w+fMxB1?0>b)M1 From 8eacafb3cd38f637c8e240441f4ab0160ba742b1 Mon Sep 17 
00:00:00 2001 From: RomanGoEmpire <71276897+RomanGoEmpire@users.noreply.github.com> Date: Sun, 12 Nov 2023 14:40:42 +0100 Subject: [PATCH 054/141] synced chat_manager.py changes synced code_of_conduct --- agents/tool_maker/chat_manager.py | 6 +- code_of_conduct.md | 489 ++++++++++++++++++++++++++++++ 2 files changed, 494 insertions(+), 1 deletion(-) create mode 100644 code_of_conduct.md diff --git a/agents/tool_maker/chat_manager.py b/agents/tool_maker/chat_manager.py index d1e938d..c3bfb3e 100644 --- a/agents/tool_maker/chat_manager.py +++ b/agents/tool_maker/chat_manager.py @@ -1,5 +1,6 @@ from openai import OpenAI import importlib +import os from tool_manager import ToolManager import json @@ -10,6 +11,7 @@ class ChatManager: def __init__(self, client: OpenAI): self.client = client + self.functions_path = "tool_maker/python_functions" def create_thread_from_user_input(self): return self.client.beta.threads.create( @@ -82,7 +84,9 @@ def handle_fucntion_request( ) function_lines = functional_response.split("```python")[1].split("```")[0] name = tool["function"]["name"] - with open(f"tool_maker/python_functions/{name}.py", "w") as file: + if not os.path.exists(self.functions_path): + os.mkdir(self.functions_path) + with open(f"{self.functions_path}/{name}.py", "w") as file: file.writelines(function_lines) response = {"tool_call_id": call.id, "output": "{success}"} diff --git a/code_of_conduct.md b/code_of_conduct.md new file mode 100644 index 0000000..b77f675 --- /dev/null +++ b/code_of_conduct.md @@ -0,0 +1,489 @@ +# HAAS Code of Conduct + +## TLDR + +**Maximize Signal-to-Noise Ratio** + +1. **DON'T WASTE TIME:** Before you post, commit, or comment, make sure that your contribution is not going to waste anyone's time. Low effort and low value contributions will be removed. Repeat offenses will result in a ban. This reduces NOISE. +2. **ADD VALUE:** You should be seeking to maximize value added to the collective. Optimize your contributions to be succinct, impactful, meaningful, and helpful. This boosts SIGNAL. +3. **DO NO HARM:** Zero tolerance for insults, trolling, moving the goalposts, and frivolous debates. Reduces noise and removes disruptors. + +For more information, check out the original C3P0 below. Participation in this project and community is a privilege, not a right. Act accordingly. + +## Don't Waste Time + +Time is our most scarce resource and is a proxy for everything else important: cognitive energy, and so on. Optimize your contributions to be a good use of time. That means you should expend your time wisely, and ensure that the time spent reading your contribution is also a good use of time. + +Good Uses of Time: +- High quality posts that are well thought out, well structured, easy to read, and not gigantic walls of text. LESS IS MORE. +- Demonstrations that quickly and succinctly convey the most information in the least amount of time. +- Examples that quickly convey how and why things work + +Bad Uses of Time: +- Debating, arguing, and quibbling over details or superfluous topics +- Moving the goalposts, changing the scope, or naysaying +- Walls of text, personal opinions, irrelevant contributions + +## Add Value + +The key here is to optimize value-add. If you don't have something particularly valuable to add, don't add anything. But, if you do have something valuable to add, please do so immediately and in a manner optimized for best use of time.
+ +Valuable Additions: +- Solve problems +- Add code +- Share resources +- Increase collective's understanding + +## Do No Harm + +This is boilerplate anti-trolling, flaming, and griefing. This behavior will result in an immediate irrevocable ban. Removing disruptors results in a better signal-to-noise ratio. + + +## C3P0 +https://github.com/daveshap/C3P0 + +```markdown +Collaborative Culture Community Policy: Zero Tolerance + + + +I. Preamble: Understanding the Present + +The Collaborative Culture Community Policy Zero Tolerance (C3P0) arises +from a critical understanding of contemporary digital culture. We +recognize the pervasive influence of engagement-driven algorithms in our +online spaces, which often amplify and incentivize toxic competition, +harmful behaviors, and low-quality interactions. These algorithms, in +their relentless pursuit of engagement metrics, often prioritize volume +over value, controversy over collaboration, and outrage over +understanding. + + + +II. Vision: Imagining a Better Future + +The C3P0 puts forth a vision for online communities that depart from these +harmful norms and instead, place the well-being of their participants at +their core. We envision digital spaces where the time, energy, and +emotional well-being of participants are respected, where interactions are +thoughtful, where discussions are fruitful, and where the digital +experience is marked by empathy, respect, and understanding. We +acknowledge that this approach may result in lower engagement levels +quantitatively, but qualitatively, it promises to elevate the online +experience and create a more meaningful, inclusive, and enjoyable digital +landscape. + + + +III. Intent: Shaping the Change + +To bring this vision into reality, the C3P0 sets out a clear policy and +practical guidelines designed to nurture a healthier digital culture. The +policy underscores the crucial shift away from metrics-driven engagement +towards a values-driven engagement that prioritizes collaborative and +constructive participation. It establishes clear expectations, upholds +transparency, and enforces accountability. In doing so, we intend to +challenge the status quo, encourage digital platforms and creators to +adopt these principles, and ultimately, play our part in fixing the +internet, one digital community at a time. + + + +IV. Potential Impact: An Unexpected Upside + +While it's easy to assume that enforcing stricter community guidelines may +suppress activity and thereby reduce engagement, historical and +sociological patterns suggest otherwise. Just as the ban on smoking in +public spaces, which was initially perceived as detrimental to businesses +such as bars and restaurants, eventually increased their patronage, we +believe a similar pattern may emerge in online communities. + +By instituting these guidelines and pushing harmful behaviors to the +fringes, we may be able to create a more inviting environment. When online +spaces become more respectful, more inclusive, and more value-driven, they +also become more appealing to a broader audience. + +In essence, this is an appeal to quality over quantity: a smaller number +of high-quality, respectful interactions over a large volume of toxic +engagements. Yet, it's not only the quality that might improve - the +quantity might as well. + +By creating safer spaces online, we may actually foster deeper, more +meaningful interactions. We could attract more diverse voices, +perspectives, and ideas, which would otherwise be pushed out by toxic +behaviors.
And so, paradoxically, by striving to create a more wholesome, +respectful environment - even if it means enforcing stricter guidelines +and seeing some decline in immediate engagement - we might actually drive +up engagement in the long run. + +In this way, the C3P0 isn't just an ethical and moral approach to digital +community management - it could also be a smarter, more sustainable +approach to fostering vibrant, engaging, and diverse online communities. +It's a win-win for everyone: a healthier, more enjoyable experience for +users, and a more engaged, diverse, and respectful community for platforms +and creators. + + + +V. Theory and Principles + +To embody our values and vision, we have identified several guiding +principles which serve as the backbone of our policy: + +A. Privilege of Participation + Engagement in our online community is a privilege, not a right. This + distinction underscores the need for every member to contribute positively + and constructively. Our community is not an inalienable public square but + a collective space built on shared norms and respect. It invites + individuals who uphold these values, enrich discussions, and respect + fellow participants. Those unable or unwilling to abide by these standards + may forfeit the privilege of participating in our community. + +B. Right to Consent + Every member of our community possesses the fundamental right to consent + to how they're treated. This principle asserts that one's online presence + doesn't imply an open invitation for harassment, abuse, or unwanted + attention. It repudiates the notion that by merely being online, + individuals are 'asking for' or 'exposing themselves to' potential harm. + We respect and uphold the rights of our members to engage in a manner that + they feel comfortable with, putting their safety and well-being over + superficial metrics of engagement. + +C. Time Vampirism + Time vampirism refers to behaviors that drain the collective energy and + time of our community. These behaviors, such as trolling, engaging in + baseless arguments, and continually shifting the goalposts, detract from + meaningful engagement and waste the valuable time of our members. We + consider time a precious commodity and take a strong stance against + practices that misuse or squander it. + +D. Collaborative Culture + We foster a culture of collaboration rather than competition. Many online + platforms promote competition, driving engagement through conflict and + outrage. This often results in divisive, harmful spaces that discourage + genuine discourse. In contrast, our community encourages collective + growth, mutual learning, and supportive interactions. We believe that a + collaborative environment, where members uplift and learn from each other, + leads to richer, more positive experiences. + +E. Constructive Participation + Constructive participation is the cornerstone of our community. We expect + all members to contribute in a manner that is thoughtful, respectful, and + aligned with the community's purpose. This involves listening to others, + providing insightful input, staying on-topic, and treating all + participants with kindness. Constructive participation recognizes and + appreciates the diversity of our community, fostering an environment where + different perspectives can be shared and valued without fear of hostility + or derision. + +F. Accountability + Accountability is central to the functioning of a healthy, respectful, and + collaborative online community. 
Existing algorithms and policies on many + online platforms often fall short in this regard, allowing harmful + behaviors to persist under the guise of 'free speech' or engagement. This + approach not only undermines the quality of interactions but can also have + severe mental health repercussions for users exposed to such behaviors. + Our policy asserts that accountability should be paramount in any online + community. Members should take responsibility for their actions, comments, + and the effect they have on others. + +G. Consequences + Just as in real life, there must be consequences for actions that harm or + disrupt the community. All too often, online platforms allow harmful + behaviors to go unchecked, breeding a toxic environment that discourages + meaningful engagement and collaboration. We stand firm against this trend. + Actions that violate the standards and guidelines of our community, such + as bullying, harassment, or any form of harm, must be met with clear, + swift, and appropriate consequences. These may range from warnings and + temporary suspensions to permanent bans, depending on the severity and + frequency of the violation. By enforcing consequences, we aim to create an + online environment that mirrors the respect, responsibility, and + accountability expected in our everyday offline interactions. + +The C3P0 is a bold step towards fostering a healthier, more supportive, +and collaborative internet culture. By implementing this policy, we aspire +to redirect the internet away from harmful practices and towards a +community that values respect, collaboration, and constructive +participation above all. + + + +VI. The Golden Rule: Don't Waste Time + +Time is the most precious commodity we possess, and within the context of +our policy, it becomes the 'new algorithm,' the key metric guiding our +community interactions. Our golden rule enshrines this concept: Don't +waste anyone's time. + +This principle acknowledges that every second spent within our community +is valuable 'signal.' It reflects our commitment to maximize the +'signal-to-noise' ratio in our community interactions. By 'signal,' we +mean meaningful, valuable contributions that enrich discussions and foster +a sense of community. 'Noise,' on the other hand, encompasses behaviors +and content that waste time, distract from meaningful engagement, or +disrupt the constructive community environment. + +This golden rule is inherently subjective and contextual, reflecting the +varied nature of discussions and dynamics across diverse internet +communities. What might be considered 'noise' or time-wasting in one +context might be acceptable in another. For instance, a light-hearted meme +may be a distraction in a focused technical discussion, but in a casual +conversation, it might foster a sense of community and fun. + +Our golden rule encourages every member to ponder before posting: Does +this contribution act as a valuable signal? Does it respect the time of +others and align with the community's purpose? This perspective flips the +conventional understanding of engagement on its head, with time and +'signal' quality becoming the critical factors over mere quantity. + +We trust our community members to understand and exercise this principle +effectively. We are not seeking to micromanage, but rather to foster a +culture where time, as the new algorithm, is respected and valued. 
By +promoting a high signal-to-noise ratio, we aim to nurture a collaborative +environment marked by meaningful engagement, mutual respect, and valuable +contributions. + + + +VII. Time-Wasting Behaviors: Reducing Noise + +The C3P0 defines certain behaviors as 'noise' within our community, which +tend to detract from the quality of interactions and waste the valuable +time of our members. Here, we offer a non-exhaustive list of behaviors +that are generally considered 'time-wasting'. Understanding these +behaviors can help our community members better adhere to our golden rule: +Don't waste anyone's time. + +A. Trolling + Trolling refers to intentionally disruptive actions aimed at provoking + negative reactions, derailing discussions, or sowing discord within the + community. This behavior serves no constructive purpose and wastes the + community's time by redirecting attention away from valuable discourse. + +B. Baseless Arguing + Engaging in arguments without any substantial basis or evidence, often for + the sake of arguing, is another form of time-wasting behavior. This not + only detracts from meaningful discussions but also creates a hostile + environment that discourages constructive participation. + +C. Shifting the Goalposts + This behavior involves continually changing the criteria or standards in a + discussion once they have been met. It results in endless debates that + waste time and stifle the productive exchange of ideas. It also includes + 'whataboutism' and other red herrings. + +D. Armchair Debating + Participating in debates about complex subjects without appropriate + knowledge, understanding, or consideration of expert opinions is often + unproductive and may mislead other community members, thus wasting their + time. This also includes 'oneupmanship' and sophistry. + +E. Disingenuous Behaviors + Any form of dishonesty, such as misrepresentation of facts, misleading + other members, or feigning ignorance to provoke a reaction, is considered + time-wasting. Authenticity and honesty are essential in creating a + community built on trust and mutual respect. + +F. Harassing Behaviors + Any actions that involve persistent unwanted attention, bullying, or + infringement on another member's right to consent are strictly considered + time-wasting and disrespectful. Our community places a high value on the + emotional well-being of all members, and harassment of any form will not + be tolerated. + +By clearly identifying these behaviors, we aim to promote self-awareness +among our members. We expect everyone in our community to refrain from +these time-wasting behaviors, to contribute positively to the +signal-to-noise ratio, and to respect the golden rule. We hope this +contributes to a collaborative, respectful, and engaging environment where +each interaction is a good use of everyone's time. + + + +VIII. Good Uses of Time: Amplifying Signal + +Following our Golden Rule, we want to emphasize behaviors that contribute +positively to our community's signal-to-noise ratio. These behaviors, or +'good uses of time,' are actively encouraged as they align with our vision +of cultivating a respectful, collaborative, and engaging online +environment. Again, this list isn't exhaustive but serves as an +illustration of potentially beneficial behaviors: + +A. Thoughtful Participation + Taking the time to craft well-thought-out responses, comments, or posts + that contribute to the topic at hand is highly valued. 
Thoughtful + participation fosters meaningful discussions and is a respectful use of + everyone's time. + +B. Active Listening + Active listening involves engaging with others' ideas, showing + understanding, and responding constructively. This behavior demonstrates + respect for others' time and effort in sharing their thoughts and fosters + an environment of mutual learning. + +C. Respectful Disagreement + Disagreements are inevitable in any community, but it's important to + handle them respectfully. Expressing disagreement in a thoughtful, + respectful manner that focuses on the idea rather than the person is a + productive use of time and enriches discussions. + +D. Asking Insightful Questions + Asking insightful questions can stimulate discussion, encourage deeper + thought, and promote mutual learning. These questions are often open-ended + and invite others to share their perspectives, experiences, or expertise. + +E. Sharing Knowledge + Sharing relevant information, expertise, or experiences that contribute to + a discussion is highly encouraged. It adds value to conversations and is a + good use of everyone's time. + +F. Constructive Feedback + Providing constructive feedback helps others improve, fosters mutual + growth, and strengthens the community. Remember to focus on the behavior + or the idea, not the person, and to communicate your feedback in a + respectful, supportive manner. + +By promoting these behaviors, we aim to cultivate a community environment +that values quality over quantity, respects each other's time, and fosters +meaningful, engaging interactions. We encourage all members to practice +these behaviors and contribute positively to the community's +signal-to-noise ratio. In this way, every interaction within our community +becomes a valuable signal, and a respectful use of everyone's time. + + + +IX. Consequences: Upholding Accountability + +Consequences for time-wasting or harmful behavior serve to uphold +accountability and maintain the respect, safety, and integrity of our +community. It is important to note that the capacity to enforce certain +consequences will depend on the specific capabilities of various digital +platforms. While we acknowledge this variability, the following is a +general guideline for understanding the potential consequences of +violating our policy. Again, this list is not exhaustive but offers a +range of possible actions: + +A. Warning + Initial minor offenses might result in a warning. This serves as an + opportunity for the offender to acknowledge their misstep and correct + their behavior. + +B. Temporary Suspension or Timeout + Repeated offenses or more severe misbehavior may result in temporary + suspension or a timeout. This punitive measure offers the offender a + period of reflection and the chance to reconsider their actions. + +C. Permanent Ban + In cases of extremely disruptive behavior or in the event of serious, + repeat offenses, a permanent ban may be enforced. This ensures the safety + and well-being of the rest of the community. + +D. Removal of Content + Certain offenses may necessitate the removal of the offender's content. + This can range from a single inappropriate comment to an entire thread, + depending on the severity of the violation. + +While the application of these consequences may vary from platform to +platform, the core principle remains the same: enforcing accountability +for harmful behaviors. 
We recognize that not all platforms allow for the +full range of these consequences, often providing only more extreme +measures like blocking or banning. In such cases, we encourage communities +to be judicious but firm in their enforcement. + +It's crucial to emphasize that the goal of these consequences isn't to +punish but to uphold the integrity, safety, and respect of our online +community. The same principle applies in various offline scenarios: flight +attendants wouldn't hesitate to enforce smoking prohibitions on a plane, +or if someone lit up a cigarette in a restaurant, they would be asked to +leave. These actions aren't seen as punishments, but as necessary measures +to ensure the safety and comfort of everyone else present. + +Likewise, the C3P0 policy sees the enforcement of consequences as a +commitment to creating a safe, respectful, and collaborative online +community. One person's noxious behavior can make the digital environment +unsafe and unpleasant for everyone else. By implementing consequences, +we're not punishing individuals but safeguarding the collective wellbeing +of our community. In this way, we are making the internet a better place +for everyone. + + + +X. Guidelines for Moderators: How to Use C3P0 Fairly + +The role of a moderator in implementing the C3P0 policy is crucial. It is +a challenging role that requires sensitivity, discernment, and a deep +understanding of our shared values and principles. The following +guidelines are designed to assist moderators in their role, ensuring that +they uphold the spirit of our policy while also encouraging lively, +constructive participation. + +A. Balance + The goal of our policy is not to suppress discussion or create + an environment of fear. While we adopt a Zero Tolerance stance towards + harmful behaviors, it is equally important to balance this with an + encouraging, inclusive atmosphere for genuine, well-meaning participants. + It's critical to foster an environment where users feel free to express + themselves, debate, and share ideas without fear of undue reprisal. The + aim should be to strike a balance where all users feel safe and heard, but + not silenced. + +B. Transparency + Being transparent about the rules, decisions, and actions + taken is crucial for fostering trust within the community. When enforcing + consequences, explain the reason clearly, citing the specific violation of + the policy. This clarity will not only help the individual understand + their misstep but also serve as a learning opportunity for the rest of the + community. Openly communicate any changes or updates to the community + guidelines, and provide reasons behind these modifications. Additionally, + consider creating a publicly accessible log of moderation actions (while + maintaining user privacy), which can demonstrate your commitment to + fairness and accountability. + +C. Consistency and Fairness + Treat all members of the community with equal + respect and fairness, regardless of their status or popularity. Ensure + that the policy is applied consistently to everyone. Avoid showing + favoritism or bias, as this can damage the trust and harmony within the + community. For instance, a new user violating the guidelines should + receive the same treatment as a seasoned member. In cases of rule + violation, communicate clearly about the infringement, the relevant + section of the policy it contravenes, and the subsequent action taken. By + doing so, you demonstrate transparency and uphold the principle of + fairness. + +D. 
Proactive Engagement + Anticipate potential issues and respond to them + before they escalate. This could involve addressing emerging conflicts, + clarifying misunderstandings, or reiterating community guidelines as + necessary. Being proactive also means guiding discussions constructively + to prevent them from spiraling into negativity or toxicity. For instance, + if you observe a conversation heating up, consider stepping in with a + reminder about respectful dialogue or steering the conversation back on + track. This proactive approach can maintain a positive environment and + prevent the need for punitive measures. + +E. Diplomacy and Empathy + The essence of moderation is not in the exercise + of power, but in diplomacy and empathy. When enforcing guidelines, + approach the situation with understanding and tact. Aim to guide rather + than chastise, keeping in mind that your goal is to foster a respectful, + constructive community. Before taking action, consider the context, the + user's history, and the potential for misunderstanding. If possible, + privately communicate with the user in question to address the issue, + explaining the violation and the necessity for the guideline. This + diplomatic approach can help resolve issues without resorting to public + penalties, which should be used as a last resort. + +Always remember that there is a person behind each username, with their +own experiences, perspectives, and feelings. Strive to foster a supportive +and understanding atmosphere where everyone feels respected and heard. +While firmness is necessary to maintain order and respect, it should +always be balanced with empathy and respect for individual dignity. + +The goal of the C3P0 is not just to penalize harmful behavior but to +actively encourage a positive, respectful, and collaborative culture. As a +moderator, your role is pivotal in shaping the tone and culture of the +community. Your fair and balanced approach to implementing this policy +will be key in creating an online space where everyone feels valued, +respected, and free to contribute. 
+``` From c4e31bfc11ec583381ff630f372e624de1417e38 Mon Sep 17 00:00:00 2001 From: RomanGoEmpire <71276897+RomanGoEmpire@users.noreply.github.com> Date: Sun, 12 Nov 2023 14:40:42 +0100 Subject: [PATCH 055/141] synced chat_manager.py changes synced code_of_conduct --- agents/tool_maker/chat_manager.py | 6 +- code_of_conduct.md | 489 ++++++++++++++++++++++++++++++ 2 files changed, 494 insertions(+), 1 deletion(-) create mode 100644 code_of_conduct.md diff --git a/agents/tool_maker/chat_manager.py b/agents/tool_maker/chat_manager.py index d1e938d..c3bfb3e 100644 --- a/agents/tool_maker/chat_manager.py +++ b/agents/tool_maker/chat_manager.py @@ -1,5 +1,6 @@ from openai import OpenAI import importlib +import os from tool_manager import ToolManager import json @@ -10,6 +11,7 @@ class ChatManager: def __init__(self, client: OpenAI): self.client = client + self.functions_path = "tool_maker/python_functions" def create_thread_from_user_input(self): return self.client.beta.threads.create( @@ -82,7 +84,9 @@ def handle_fucntion_request( ) function_lines = functional_response.split("```python")[1].split("```")[0] name = tool["function"]["name"] - with open(f"tool_maker/python_functions/{name}.py", "w") as file: + if not os.path.exists(self.functions_path): + os.mkdir(self.functions_path) + with open(f"{self.functions_path}/{name}.py", "w") as file: file.writelines(function_lines) response = {"tool_call_id": call.id, "output": "{success}"} diff --git a/code_of_conduct.md b/code_of_conduct.md new file mode 100644 index 0000000..b77f675 --- /dev/null +++ b/code_of_conduct.md @@ -0,0 +1,489 @@ +# HAAS Code of Conduct + +## TLDR + +**Maximize Signal-to-Noise Ratio** + +1. **DON'T WASTE TIME:** Before you post, commit, or comment, make sure that your contribution is not going to waste anyone's time. Low effort and low value contributions will be removed. Repeat offenses will result in a ban. This reduces NOISE. +2. **ADD VALUE:** You should be seeking to maximize value added to the collective. Optimize your contributions to be succinct, impactful, meaningful, and helpful. This boosts SIGNAL. +3. **DO NO HARM:** Zero tolerance for insults, trolling, moving the goalposts, and frivolous debates. Reduces noise and removes disruptors. + +For more information, check out the original C3P0 below. Participation in this project and community is a privilege, not a right. Act accordingly. + +## Don't Waste Time + +Time is our most scarce resource and is a proxy for everything else important: cognitive energy, and so on. Optimize your contributions to be a good use of time. That means you should expend your time wisely, and ensure that the time spent reading your contribution is also a good use of time. + +Good Uses of Time: +- High quality posts that are well thought out, well structured, easy to read, and not gigantic walls of text. LESS IS MORE. +- Demonstrations that quickly and succinctly convey the most information in the least amount of time. +- Examples that quickly convey how and why things work + +Bad Uses of Time: +- Debating, arguing, and quibbling over details or superfluous topics +- Moving the goalposts, changing the scope, or naysaying +- Walls of text, personal opinions, irrelevant contributions + +## Add Value + +The key here is to optimize value-add. If you don't have something particularly valuable to add, don't add anything. But, if you do have something valuable to add, please do so immediately and in a manner optimized for best use of time.
+ +Valuable Additions: +- Solve problems +- Add code +- Share resources +- Increase collective's understanding + +## Do No Harm + +This is boilerplate anti-trolling, flaming, and griefing. This behavior will result in an immediate irrevocable ban. Removing disruptors results in a better signal-to-noise ratio. + + +## C3P0 +https://github.com/daveshap/C3P0 + +```markdown +Collaborative Culture Community Policy: Zero Tolerance + + + +I. Preamble: Understanding the Present + +The Collaborative Culture Community Policy Zero Tolerance (C3P0) arises +from a critical understanding of contemporary digital culture. We +recognize the pervasive influence of engagement-driven algorithms in our +online spaces, which often amplify and incentivize toxic competition, +harmful behaviors, and low-quality interactions. These algorithms, in +their relentless pursuit of engagement metrics, often prioritize volume +over value, controversy over collaboration, and outrage over +understanding. + + + +II. Vision: Imagining a Better Future + +The C3P0 puts forth a vision for online communities that depart from these +harmful norms and instead, place the well-being of their participants at +their core. We envision digital spaces where the time, energy, and +emotional well-being of participants are respected, where interactions are +thoughtful, where discussions are fruitful, and where the digital +experience is marked by empathy, respect, and understanding. We +acknowledge that this approach may result in lower engagement levels +quantitatively, but qualitatively, it promises to elevate the online +experience and create a more meaningful, inclusive, and enjoyable digital +landscape. + + + +III. Intent: Shaping the Change + +To bring this vision into reality, the C3P0 sets out a clear policy and +practical guidelines designed to nurture a healthier digital culture. The +policy underscores the crucial shift away from metrics-driven engagement +towards a values-driven engagement that prioritizes collaborative and +constructive participation. It establishes clear expectations, upholds +transparency, and enforces accountability. In doing so, we intend to +challenge the status quo, encourage digital platforms and creators to +adopt these principles, and ultimately, play our part in fixing the +internet, one digital community at a time. + + + +IV. Potential Impact: An Unexpected Upside + +While it's easy to assume that enforcing stricter community guidelines may +suppress activity and thereby reduce engagement, historical and +sociological patterns suggest otherwise. Just as the ban on smoking in +public spaces, which was initially perceived as detrimental to businesses +such as bars and restaurants, eventually increased their patronage, we +believe a similar pattern may emerge in online communities. + +By instituting these guidelines and pushing harmful behaviors to the +fringes, we may be able to create a more inviting environment. When online +spaces become more respectful, more inclusive, and more value-driven, they +also become more appealing to a broader audience. + +In essence, this is an appeal to quality over quantity: a smaller number +of high-quality, respectful interactions over a large volume of toxic +engagements. Yet, it's not only the quality that might improve - the +quantity might as well. + +By creating safer spaces online, we may actually foster deeper, more +meaningful interactions. We could attract more diverse voices, +perspectives, and ideas, which would otherwise be pushed out by toxic +behaviors.
And so, paradoxically, by striving to create a more wholesome, +respectful environment - even if it means enforcing stricter guidelines +and seeing some decline in immediate engagement - we might actually drive +up engagement in the long run. + +In this way, the C3P0 isn't just an ethical and moral approach to digital +community management - it could also be a smarter, more sustainable +approach to fostering vibrant, engaging, and diverse online communities. +It's a win-win for everyone: a healthier, more enjoyable experience for +users, and a more engaged, diverse, and respectful community for platforms +and creators. + + + +V. Theory and Principles + +To embody our values and vision, we have identified several guiding +principles which serve as the backbone of our policy: + +A. Privilege of Participation + Engagement in our online community is a privilege, not a right. This + distinction underscores the need for every member to contribute positively + and constructively. Our community is not an inalienable public square but + a collective space built on shared norms and respect. It invites + individuals who uphold these values, enrich discussions, and respect + fellow participants. Those unable or unwilling to abide by these standards + may forfeit the privilege of participating in our community. + +B. Right to Consent + Every member of our community possesses the fundamental right to consent + to how they're treated. This principle asserts that one's online presence + doesn't imply an open invitation for harassment, abuse, or unwanted + attention. It repudiates the notion that by merely being online, + individuals are 'asking for' or 'exposing themselves to' potential harm. + We respect and uphold the rights of our members to engage in a manner that + they feel comfortable with, putting their safety and well-being over + superficial metrics of engagement. + +C. Time Vampirism + Time vampirism refers to behaviors that drain the collective energy and + time of our community. These behaviors, such as trolling, engaging in + baseless arguments, and continually shifting the goalposts, detract from + meaningful engagement and waste the valuable time of our members. We + consider time a precious commodity and take a strong stance against + practices that misuse or squander it. + +D. Collaborative Culture + We foster a culture of collaboration rather than competition. Many online + platforms promote competition, driving engagement through conflict and + outrage. This often results in divisive, harmful spaces that discourage + genuine discourse. In contrast, our community encourages collective + growth, mutual learning, and supportive interactions. We believe that a + collaborative environment, where members uplift and learn from each other, + leads to richer, more positive experiences. + +E. Constructive Participation + Constructive participation is the cornerstone of our community. We expect + all members to contribute in a manner that is thoughtful, respectful, and + aligned with the community's purpose. This involves listening to others, + providing insightful input, staying on-topic, and treating all + participants with kindness. Constructive participation recognizes and + appreciates the diversity of our community, fostering an environment where + different perspectives can be shared and valued without fear of hostility + or derision. + +F. Accountability + Accountability is central to the functioning of a healthy, respectful, and + collaborative online community. 
Existing algorithms and policies on many + online platforms often fall short in this regard, allowing harmful + behaviors to persist under the guise of 'free speech' or engagement. This + approach not only undermines the quality of interactions but can also have + severe mental health repercussions for users exposed to such behaviors. + Our policy asserts that accountability should be paramount in any online + community. Members should take responsibility for their actions, comments, + and the effect they have on others. + +G. Consequences + Just as in real life, there must be consequences for actions that harm or + disrupt the community. All too often, online platforms allow harmful + behaviors to go unchecked, breeding a toxic environment that discourages + meaningful engagement and collaboration. We stand firm against this trend. + Actions that violate the standards and guidelines of our community, such + as bullying, harassment, or any form of harm, must be met with clear, + swift, and appropriate consequences. These may range from warnings and + temporary suspensions to permanent bans, depending on the severity and + frequency of the violation. By enforcing consequences, we aim to create an + online environment that mirrors the respect, responsibility, and + accountability expected in our everyday offline interactions. + +The C3P0 is a bold step towards fostering a healthier, more supportive, +and collaborative internet culture. By implementing this policy, we aspire +to redirect the internet away from harmful practices and towards a +community that values respect, collaboration, and constructive +participation above all. + + + +VI. The Golden Rule: Don't Waste Time + +Time is the most precious commodity we possess, and within the context of +our policy, it becomes the 'new algorithm,' the key metric guiding our +community interactions. Our golden rule enshrines this concept: Don't +waste anyone's time. + +This principle acknowledges that every second spent within our community +is valuable 'signal.' It reflects our commitment to maximize the +'signal-to-noise' ratio in our community interactions. By 'signal,' we +mean meaningful, valuable contributions that enrich discussions and foster +a sense of community. 'Noise,' on the other hand, encompasses behaviors +and content that waste time, distract from meaningful engagement, or +disrupt the constructive community environment. + +This golden rule is inherently subjective and contextual, reflecting the +varied nature of discussions and dynamics across diverse internet +communities. What might be considered 'noise' or time-wasting in one +context might be acceptable in another. For instance, a light-hearted meme +may be a distraction in a focused technical discussion, but in a casual +conversation, it might foster a sense of community and fun. + +Our golden rule encourages every member to ponder before posting: Does +this contribution act as a valuable signal? Does it respect the time of +others and align with the community's purpose? This perspective flips the +conventional understanding of engagement on its head, with time and +'signal' quality becoming the critical factors over mere quantity. + +We trust our community members to understand and exercise this principle +effectively. We are not seeking to micromanage, but rather to foster a +culture where time, as the new algorithm, is respected and valued. 
By +promoting a high signal-to-noise ratio, we aim to nurture a collaborative +environment marked by meaningful engagement, mutual respect, and valuable +contributions. + + + +VII. Time-Wasting Behaviors: Reducing Noise + +The C3P0 defines certain behaviors as 'noise' within our community, which +tend to detract from the quality of interactions and waste the valuable +time of our members. Here, we offer a non-exhaustive list of behaviors +that are generally considered 'time-wasting'. Understanding these +behaviors can help our community members better adhere to our golden rule: +Don't waste anyone's time. + +A. Trolling + Trolling refers to intentionally disruptive actions aimed at provoking + negative reactions, derailing discussions, or sowing discord within the + community. This behavior serves no constructive purpose and wastes the + community's time by redirecting attention away from valuable discourse. + +B. Baseless Arguing + Engaging in arguments without any substantial basis or evidence, often for + the sake of arguing, is another form of time-wasting behavior. This not + only detracts from meaningful discussions but also creates a hostile + environment that discourages constructive participation. + +C. Shifting the Goalposts + This behavior involves continually changing the criteria or standards in a + discussion once they have been met. It results in endless debates that + waste time and stifle the productive exchange of ideas. It also includes + 'whataboutism' and other red herrings. + +D. Armchair Debating + Participating in debates about complex subjects without appropriate + knowledge, understanding, or consideration of expert opinions is often + unproductive and may mislead other community members, thus wasting their + time. This also includes 'oneupmanship' and sophistry. + +E. Disingenuous Behaviors + Any form of dishonesty, such as misrepresentation of facts, misleading + other members, or feigning ignorance to provoke a reaction, is considered + time-wasting. Authenticity and honesty are essential in creating a + community built on trust and mutual respect. + +F. Harassing Behaviors + Any actions that involve persistent unwanted attention, bullying, or + infringement on another member's right to consent are strictly considered + time-wasting and disrespectful. Our community places a high value on the + emotional well-being of all members, and harassment of any form will not + be tolerated. + +By clearly identifying these behaviors, we aim to promote self-awareness +among our members. We expect everyone in our community to refrain from +these time-wasting behaviors, to contribute positively to the +signal-to-noise ratio, and to respect the golden rule. We hope this +contributes to a collaborative, respectful, and engaging environment where +each interaction is a good use of everyone's time. + + + +VIII. Good Uses of Time: Amplifying Signal + +Following our Golden Rule, we want to emphasize behaviors that contribute +positively to our community's signal-to-noise ratio. These behaviors, or +'good uses of time,' are actively encouraged as they align with our vision +of cultivating a respectful, collaborative, and engaging online +environment. Again, this list isn't exhaustive but serves as an +illustration of potentially beneficial behaviors: + +A. Thoughtful Participation + Taking the time to craft well-thought-out responses, comments, or posts + that contribute to the topic at hand is highly valued. 
Thoughtful + participation fosters meaningful discussions and is a respectful use of + everyone's time. + +B. Active Listening + Active listening involves engaging with others' ideas, showing + understanding, and responding constructively. This behavior demonstrates + respect for others' time and effort in sharing their thoughts and fosters + an environment of mutual learning. + +C. Respectful Disagreement + Disagreements are inevitable in any community, but it's important to + handle them respectfully. Expressing disagreement in a thoughtful, + respectful manner that focuses on the idea rather than the person is a + productive use of time and enriches discussions. + +D. Asking Insightful Questions + Asking insightful questions can stimulate discussion, encourage deeper + thought, and promote mutual learning. These questions are often open-ended + and invite others to share their perspectives, experiences, or expertise. + +E. Sharing Knowledge + Sharing relevant information, expertise, or experiences that contribute to + a discussion is highly encouraged. It adds value to conversations and is a + good use of everyone's time. + +F. Constructive Feedback + Providing constructive feedback helps others improve, fosters mutual + growth, and strengthens the community. Remember to focus on the behavior + or the idea, not the person, and to communicate your feedback in a + respectful, supportive manner. + +By promoting these behaviors, we aim to cultivate a community environment +that values quality over quantity, respects each other's time, and fosters +meaningful, engaging interactions. We encourage all members to practice +these behaviors and contribute positively to the community's +signal-to-noise ratio. In this way, every interaction within our community +becomes a valuable signal, and a respectful use of everyone's time. + + + +IX. Consequences: Upholding Accountability + +Consequences for time-wasting or harmful behavior serve to uphold +accountability and maintain the respect, safety, and integrity of our +community. It is important to note that the capacity to enforce certain +consequences will depend on the specific capabilities of various digital +platforms. While we acknowledge this variability, the following is a +general guideline for understanding the potential consequences of +violating our policy. Again, this list is not exhaustive but offers a +range of possible actions: + +A. Warning + Initial minor offenses might result in a warning. This serves as an + opportunity for the offender to acknowledge their misstep and correct + their behavior. + +B. Temporary Suspension or Timeout + Repeated offenses or more severe misbehavior may result in temporary + suspension or a timeout. This punitive measure offers the offender a + period of reflection and the chance to reconsider their actions. + +C. Permanent Ban + In cases of extremely disruptive behavior or in the event of serious, + repeat offenses, a permanent ban may be enforced. This ensures the safety + and well-being of the rest of the community. + +D. Removal of Content + Certain offenses may necessitate the removal of the offender's content. + This can range from a single inappropriate comment to an entire thread, + depending on the severity of the violation. + +While the application of these consequences may vary from platform to +platform, the core principle remains the same: enforcing accountability +for harmful behaviors. 
We recognize that not all platforms allow for the +full range of these consequences, often providing only more extreme +measures like blocking or banning. In such cases, we encourage communities +to be judicious but firm in their enforcement. + +It's crucial to emphasize that the goal of these consequences isn't to +punish but to uphold the integrity, safety, and respect of our online +community. The same principle applies in various offline scenarios: flight +attendants wouldn't hesitate to enforce smoking prohibitions on a plane, +or if someone lit up a cigarette in a restaurant, they would be asked to +leave. These actions aren't seen as punishments, but as necessary measures +to ensure the safety and comfort of everyone else present. + +Likewise, the C3P0 policy sees the enforcement of consequences as a +commitment to creating a safe, respectful, and collaborative online +community. One person's noxious behavior can make the digital environment +unsafe and unpleasant for everyone else. By implementing consequences, +we're not punishing individuals but safeguarding the collective wellbeing +of our community. In this way, we are making the internet a better place +for everyone. + + + +X. Guidelines for Moderators: How to Use C3P0 Fairly + +The role of a moderator in implementing the C3P0 policy is crucial. It is +a challenging role that requires sensitivity, discernment, and a deep +understanding of our shared values and principles. The following +guidelines are designed to assist moderators in their role, ensuring that +they uphold the spirit of our policy while also encouraging lively, +constructive participation. + +A. Balance + The goal of our policy is not to suppress discussion or create + an environment of fear. While we adopt a Zero Tolerance stance towards + harmful behaviors, it is equally important to balance this with an + encouraging, inclusive atmosphere for genuine, well-meaning participants. + It's critical to foster an environment where users feel free to express + themselves, debate, and share ideas without fear of undue reprisal. The + aim should be to strike a balance where all users feel safe and heard, but + not silenced. + +B. Transparency + Being transparent about the rules, decisions, and actions + taken is crucial for fostering trust within the community. When enforcing + consequences, explain the reason clearly, citing the specific violation of + the policy. This clarity will not only help the individual understand + their misstep but also serve as a learning opportunity for the rest of the + community. Openly communicate any changes or updates to the community + guidelines, and provide reasons behind these modifications. Additionally, + consider creating a publicly accessible log of moderation actions (while + maintaining user privacy), which can demonstrate your commitment to + fairness and accountability. + +C. Consistency and Fairness + Treat all members of the community with equal + respect and fairness, regardless of their status or popularity. Ensure + that the policy is applied consistently to everyone. Avoid showing + favoritism or bias, as this can damage the trust and harmony within the + community. For instance, a new user violating the guidelines should + receive the same treatment as a seasoned member. In cases of rule + violation, communicate clearly about the infringement, the relevant + section of the policy it contravenes, and the subsequent action taken. By + doing so, you demonstrate transparency and uphold the principle of + fairness. + +D. 
Proactive Engagement + Anticipate potential issues and respond to them + before they escalate. This could involve addressing emerging conflicts, + clarifying misunderstandings, or reiterating community guidelines as + necessary. Being proactive also means guiding discussions constructively + to prevent them from spiraling into negativity or toxicity. For instance, + if you observe a conversation heating up, consider stepping in with a + reminder about respectful dialogue or steering the conversation back on + track. This proactive approach can maintain a positive environment and + prevent the need for punitive measures. + +E. Diplomacy and Empathy + The essence of moderation is not in the exercise + of power, but in diplomacy and empathy. When enforcing guidelines, + approach the situation with understanding and tact. Aim to guide rather + than chastise, keeping in mind that your goal is to foster a respectful, + constructive community. Before taking action, consider the context, the + user's history, and the potential for misunderstanding. If possible, + privately communicate with the user in question to address the issue, + explaining the violation and the necessity for the guideline. This + diplomatic approach can help resolve issues without resorting to public + penalties, which should be used as a last resort. + +Always remember that there is a person behind each username, with their +own experiences, perspectives, and feelings. Strive to foster a supportive +and understanding atmosphere where everyone feels respected and heard. +While firmness is necessary to maintain order and respect, it should +always be balanced with empathy and respect for individual dignity. + +The goal of the C3P0 is not just to penalize harmful behavior but to +actively encourage a positive, respectful, and collaborative culture. As a +moderator, your role is pivotal in shaping the tone and culture of the +community. Your fair and balanced approach to implementing this policy +will be key in creating an online space where everyone feels valued, +respected, and free to contribute. 
+``` From a6be04ccc784c151c00007fcc8805d4dd81d5b78 Mon Sep 17 00:00:00 2001 From: RomanGoEmpire <71276897+RomanGoEmpire@users.noreply.github.com> Date: Sun, 12 Nov 2023 14:44:47 +0100 Subject: [PATCH 056/141] further synced tool-maker --- .../__pycache__/assistant_manager.cpython-39.pyc | Bin 4322 -> 0 bytes .../__pycache__/chat_manager.cpython-39.pyc | Bin 5472 -> 0 bytes .../__pycache__/tool_manager.cpython-39.pyc | Bin 1155 -> 0 bytes agents/tool_maker/chat_manager.py | 5 +++-- 4 files changed, 3 insertions(+), 2 deletions(-) delete mode 100644 agents/tool_maker/__pycache__/assistant_manager.cpython-39.pyc delete mode 100644 agents/tool_maker/__pycache__/chat_manager.cpython-39.pyc delete mode 100644 agents/tool_maker/__pycache__/tool_manager.cpython-39.pyc diff --git a/agents/tool_maker/__pycache__/assistant_manager.cpython-39.pyc b/agents/tool_maker/__pycache__/assistant_manager.cpython-39.pyc deleted file mode 100644 index 5af6ff3cecec32396f1e7a3e3ba6160d6efdd33d..0000000000000000000000000000000000000000 GIT binary patch literal 0 HcmV?d00001 literal 4322 zcmb_f-EQ2*73Pp!?rOD?e;OB6;-*t2PP?h4*hSDE!*F9XX&@Jo-6}xqtpZ}inU%QW zlJt-(WmV})xiyNSFCbm|A^IY`?M?d%DO&eCLoT(u777O~7aR_EX3m_S@0|H4mX_KU zuD`$aME&NfW&Mp_rY{FCAK}hYG}4kRvWBe381?PQ?%AJO(vj|0mUK_;o}*mZ*td0r zQ7>u?y`ERiZ1$R}dDfE6bJkl>o?4Kt=d9OOj%v#V^cL0PS?Ap9EvaR-bV~B=Evwu9 z2X?i(tXTX!Nutk0EOwRN4Cv44%f`z`xU-+3QC5#ht7l8rbEGYuudJRcp%$JE>B%OZ zo@~hlJe#sD7x8S#4mMjVu0G7NFv~@pS6hj{Li8Wo9M@_rhHCX5U+s_LAPPyz*J~# znJb+^Ua5ppBp;_nkz{9}g30z~B-Kf(bRMd?6_x52GG^ptxenvqYF8SeYjw^X62X%` z&-c`YqQ-EVeNy`1l7;`vp6bgK1}a%^$amD9I0%!GUcGedEZ9>+G2<c>gf!vbQ9M&R z6j6Aj)#kG3SPerhg z#P2$Vo2jU;8_>LP`(dPBSb7OBUw_=cx4oI=_$jr!57W44yu%ngv%CJ6}8!(zzUF2XFw{h?Qonvc& znz@2%MuiA(Y+w)I+;i(=>y#ZZef!utYn)prEdH=8QuA|8?8!d^@0j{V9wd6t*$FcN zY#uWNq51WL-zucCIzXaoZv3xI`I&gRgfU81iPjLfu4)DU^M)>N|> z5YlbJzK;wbq-!u0t60g^Fio`3V}F8U{}9Kl(lZ*qE`~MX`!ZnY8yL8Udji^@U+SMA zk;toyfc;;ws|nkeDo2t01u0j<^&6{KqgJm#5Zp@zR=-KjTh!b{Q`|7%oHtUxO%n)& zUAF`z)q%1Ce2;omo~O#d8-%2JAwR{OjJC6Em$l|F>6Mtg^;$lk2|N|9CT>%VAEA?5 zXE^1~oZKBW&e=KJC8$~hZ_pgH1`Fu5sZ^V%T|J%Z>Eg*`<9UOC2nc$cMPryq6%~t9 zM5*Y6F*wG^@Waehn{GqDi>>u7YAE%}8q}aH&$^46(AirtIp5dG@B+!@sjlf1DjIYm zW||BtZd4rPVkRw~3mJ}BI)0TYI+IfI1KOQJH5U~9UHrF!yQ&H$_>b_Tk#yyLW!FmTGkn zszdPzGKe%EhEW6>$U$`vLJ1-!IFW`4=K@0o;V#eq`g8-uP9je3rNM_!Hh7$X-SNE< z4u3uZA6w_2gY_9fW`iJ!b3yzKn}u=7;ZTja*h#<(r#Y};@@7q}pbf~pDM38UiT|Zz zF#nhX@M@huov4pOKGHEy;s}%w`&P3iMXVI5L1&<@gsxmj8Pa!!h&kU7}G7g1_%a&#d7>hv_j7S_;S|>UyH5-mH2#yZLIOE|kjTA9o zqn-%z5q2((_*BK9uK`aZkgUBVlK4C!x{+pTBoi~KiV*;~U#?HXb-vLzDjO#Sks8)>(-XHdp!Hr)jn+Y`tYt=c2sNC($aV{vLPML1VEtv+4IQuW|I=<%t7v zGv~!-cYzovaWLO6+P;taGK#3*@%?9rKslnj5U;OL!>M_P8p>cz8cb(;>j_`}ofG%G4>AA3ynIlj!r44gc4lJjF;;q@lmB#$9Hd z)VbwK78WkFa0i5vv!W{(xTWBr80We-$6}JP3mSz|c5hh# diff --git a/agents/tool_maker/__pycache__/chat_manager.cpython-39.pyc b/agents/tool_maker/__pycache__/chat_manager.cpython-39.pyc deleted file mode 100644 index cb43c42a0e1dc4f0455b1ee1e70249aee7353050..0000000000000000000000000000000000000000 GIT binary patch literal 0 HcmV?d00001 literal 5472 zcmcIoTW=f372eq!mn)LGT8>huUG$Qcg{wFhB&h2qaqJ{Dnka7NG&QS$UU8PvQsgc@ zvy>ucnHL#oTEJ}r^dUf;LMrm2pV7auukB0zLE*G^zcb5~C`SqUQWA4^?sI15obP%qUjt!e+D!sO3D;bkPrA-Kj_SL?9`W0cpsdPB$4=o&q{kz{ruMqVO>W)MxV5J@rn$bV)$K2^l2&J#{Z>DS zE?-+gcIrkPci;3QedBp1&1{wH8!CRvpoDe>ZcQ%`gg+W^)SDNu?4S zE=(xATy$tL^eBFmH<6USFbFNxI$DQy^bQU*F}c3SxN%S4VBEaRa0u2)W)5T^hW0DL zMi{xY84-7btkjC51e)UZoL(R$u1?l1wa3g{4-!8sC7UAfxx~5QnFN^?M*Tr@$b^G+ z)iLCPjxBp4@6^l(HQOTYHECh&5`${!BI*1z=CazuVSRF=-#GFsyFa=unzPvbzHxvX 
z6D3sDO|=OzgS?m}F-Ky71YO;P&hz7ygI+(`9cw<1F2!|x=M*g``<}2QTANNn-OOK+ zN#Kop@WkO8DWe-2>4}k;`xe(z{Zs8T{eY$Vo<1^CqpgKnV(*t!Kj(mb&i=$kCQ4;) zq$c&{<~@C6y`x1}O)YVEKVjO)#?0qcA1j$k?LGFX#_iPZRPZc)W)&lD!j?z&2X+K6 zr{@Ljf|gpa3}-Fxlrl`A){y|pr&?e8X=anx)NqE-^dkz5{@L?8#< z>;s3?1G`Q2_aZXcb|F6$*_ zhnY$GG^)jtO|_!ljxsw}Jah2uV=FM=K!{j8Y6VY-LDVevO=f#ysztJga}XMHn66Td z=?_fvF}BERy2%zFoHI|_^Eb{{H?w)1Q6SpDjHWMTC=(xNHgjB=yHK`J=;3!K`pr(% z<{ewguSi4urjf`B4J7#&pv6aQ_)4PfGgt~R;((28l+4sVV1Lj)goQ34P9PcFyrZRh z$GQsK-_}I;_gb5ASkWD)s2!EyZ8k-XP8oIXhL)E8#y-?OX579D>yEsyv}3->9^>|@ zPNh>#t&W@2KE!Ig#GShaBVVT3r#jPlYic4g0s8M6BL{6WsgqiK3M-Y<^8PHZ957y8 z(vYhmH@&2d+~fU>rjj{6lTN`GWPb74ZZvNDzRZ05}3Sy1G>)Nsxs@*50j*==uQjAM3YGz#PHY zp?*BYp`Pi>7aG;R2)4s`Ae-c;kD9rAsS8tU#l8NE&(iD^MQmMvIDBCe^HK=2@p8?d^ogCZ@uGpzUo1yF0PqP>y>%_M$lP`Xb&AIYvjjxo+_QKohOJF6 zl&TLdnh-SLYGd1R$h~!FnQs+S%UziSy=CtPur$Dw3irKd)iOI_#N!=#XoyR6gg=GI zDq$oOG039=?!dqvGCM!)VfogrTloxM5J4FlC<~Q4r$UCJKu;AITf~;9h6XVqwlgfh zF9)p_Sjb46r4^S*JWk?U5cRWyd?H(rt<9k4=iH(I_1p`{;<6>RmSP?SNH+5!;v&sO zIH_ns<{lXk*v+8N!EVG4s8#U=!cavTG6P!%Um8R*b0q>6JcG7v$$mFXGBXAN&fFal zCP6oh0x6!Pkv}ByBZ&HR&Xnp@@f3**B%UVm3`Ay6XxFF}Is@*pNMnAoJdR+;otpWA z1R$A7)+5f-v_B!CSPprOP|WP?*Z4H)DOVsg$JA@!Wiz_@z%gA&+j5!7+y{<&Rvy@P zg;gHfMg_Gd)Bj^zkcfgdqF{wBupjF)`i!Asp*8&8|9~egy3pcBNJQxhlDrN;l#PVby zk1Z`Xs9&I!Xh;$;%*icfM_ zaSaVEBuPX?(_P$$^U$#tnf}n!aYGMvd$>4xBgb}pf;f<=&~oY$p2vQQOZW(U|LCZT zYcPsyP`!*TUHo$c^b(0ktfB~4wRf+NoCH^qI&>98V~GyuC2nEN2bi~tu&$?|xd`s| zU26?QVpNXYRNJqmpu!0DrTyu&OhNnhD?_3fg+VyVAfQQF;w6v-hCscXhw#KDy=XPc zji1i=S)%7nEz&`R$s*5H^i@LQnIx% z3(rH7Y}jod-p}!_{9Yd`@rHTz+N)Q^Ds@*@xhQT>{s{bzO;Wr~4TO6~it@3djd?j( z@8R`SL29v6_+70+wTJwxlkzj8>)VXgbK#LrQA)3x;tUpr$D0u z>_)k8K$)M+DfXY0Rmdd76qr}RhvopS;03lpC}y%6n?qS47NHi?zpOaqC%UR7Jaa7@ z_~+nX1DfHfpYWeElm2tGUW@on@uRmtO3HmcK?a6;WU9oV6Pw-!>4OJx-K4ZfEVJn2 zbWG|4BLw5-7L3_KTRBBd?i^#+0DC_g2Mf6`p3K4uuTtIQblNNs9eiDUYc3kqwSx>excv5^OiOU`U7Ak9-^| zCS0l4=u;BC^fsHB+Z?}7BVTDYZx8%#-XiG5LeQrsafSp1i`-qymy0l%**DY&pWK%z zuH^e)qug=SQ8`751>6KIc(0lzWxLF_he*;_;UqpNcg5X-a0!rTh!Siv)(U0s&~ip*#9x HY|i*E!uwo^ diff --git a/agents/tool_maker/__pycache__/tool_manager.cpython-39.pyc b/agents/tool_maker/__pycache__/tool_manager.cpython-39.pyc deleted file mode 100644 index 60637ec851e5ad2445e3365e8ca04bc5386d476f..0000000000000000000000000000000000000000 GIT binary patch literal 0 HcmV?d00001 literal 1155 zcmZ`&Piqu06i+hQ?R47W!5*w$!lJ^0RJ<;tmI|WKt*|{U3n86k+tr;vO=c+_cTerN zkb*t>mFDWnuOL!=Z>F;yMKk2({mFaz<^3jVVngox|LP=>!N(GxCMb$rb$!kS@Ii+H6j*=z`7>mY45l>Z5T& ziFbqjzHs5OiIh;W7N=IFPE2Aj#C~exowvfJCY+2XMvDv(2$$Jen9Z!opM46%&FExj zsp<4O<5?VLlO*muxeYCSJ&^~CrN8SH{DVv4qNQh1>WbDhQQCD`(XQYOIUe_w*cG_a zkC)@#sGnrUjrPvOAXT9mRq*7_k%==oI8`vP)izKgHn@#P1kgbEPw*dVD(`2VC9<68Bo-k zdm!6$4szvkWx>l#euY%|`O|}nDxd^!Dy(s75<64UHBM~w7mWpvRb+}aZCq$4DY9YP z04KlOpbNKjdZLp*XfoAEDT9xj!BkmD%8YdoTe67!=fTokbeYAU+hRAmt)X9|9d0hW zZq3QexJ)L8A1t=ruAxAPCDoPi!7Mf_f@H~BUFi-)+&vJ)3#eayjIv*xv1`uRPrsdu zCVBrV(TC^cONmX8vIwM%5n>B-B!pWZC)%#Uk#-#g(%cY;)z+`j iq2*IXgH=XLZ`~5y(a0)`l-Km1SLeULSM6|{@BIc5=^c6i diff --git a/agents/tool_maker/chat_manager.py b/agents/tool_maker/chat_manager.py index c3bfb3e..2518dfb 100644 --- a/agents/tool_maker/chat_manager.py +++ b/agents/tool_maker/chat_manager.py @@ -1,8 +1,9 @@ -from openai import OpenAI import importlib +import json import os + +from openai import OpenAI from tool_manager import ToolManager -import json Assistant = type(OpenAI().beta.assistants.list().data[0]) Thread = type(OpenAI().beta.threads.create()) From c63a166240375b23cb313efc6564cd708ccd1e1b Mon Sep 17 00:00:00 2001 From: RomanGoEmpire 
<71276897+RomanGoEmpire@users.noreply.github.com> Date: Sun, 12 Nov 2023 14:44:47 +0100 Subject: [PATCH 057/141] further synced tool-maker --- .../__pycache__/assistant_manager.cpython-39.pyc | Bin 4322 -> 0 bytes .../__pycache__/chat_manager.cpython-39.pyc | Bin 5472 -> 0 bytes .../__pycache__/tool_manager.cpython-39.pyc | Bin 1155 -> 0 bytes agents/tool_maker/chat_manager.py | 5 +++-- 4 files changed, 3 insertions(+), 2 deletions(-) delete mode 100644 agents/tool_maker/__pycache__/assistant_manager.cpython-39.pyc delete mode 100644 agents/tool_maker/__pycache__/chat_manager.cpython-39.pyc delete mode 100644 agents/tool_maker/__pycache__/tool_manager.cpython-39.pyc diff --git a/agents/tool_maker/__pycache__/assistant_manager.cpython-39.pyc b/agents/tool_maker/__pycache__/assistant_manager.cpython-39.pyc deleted file mode 100644 index 5af6ff3cecec32396f1e7a3e3ba6160d6efdd33d..0000000000000000000000000000000000000000 GIT binary patch literal 0 HcmV?d00001 literal 4322 zcmb_f-EQ2*73Pp!?rOD?e;OB6;-*t2PP?h4*hSDE!*F9XX&@Jo-6}xqtpZ}inU%QW zlJt-(WmV})xiyNSFCbm|A^IY`?M?d%DO&eCLoT(u777O~7aR_EX3m_S@0|H4mX_KU zuD`$aME&NfW&Mp_rY{FCAK}hYG}4kRvWBe381?PQ?%AJO(vj|0mUK_;o}*mZ*td0r zQ7>u?y`ERiZ1$R}dDfE6bJkl>o?4Kt=d9OOj%v#V^cL0PS?Ap9EvaR-bV~B=Evwu9 z2X?i(tXTX!Nutk0EOwRN4Cv44%f`z`xU-+3QC5#ht7l8rbEGYuudJRcp%$JE>B%OZ zo@~hlJe#sD7x8S#4mMjVu0G7NFv~@pS6hj{Li8Wo9M@_rhHCX5U+s_LAPPyz*J~# znJb+^Ua5ppBp;_nkz{9}g30z~B-Kf(bRMd?6_x52GG^ptxenvqYF8SeYjw^X62X%` z&-c`YqQ-EVeNy`1l7;`vp6bgK1}a%^$amD9I0%!GUcGedEZ9>+G2<c>gf!vbQ9M&R z6j6Aj)#kG3SPerhg z#P2$Vo2jU;8_>LP`(dPBSb7OBUw_=cx4oI=_$jr!57W44yu%ngv%CJ6}8!(zzUF2XFw{h?Qonvc& znz@2%MuiA(Y+w)I+;i(=>y#ZZef!utYn)prEdH=8QuA|8?8!d^@0j{V9wd6t*$FcN zY#uWNq51WL-zucCIzXaoZv3xI`I&gRgfU81iPjLfu4)DU^M)>N|> z5YlbJzK;wbq-!u0t60g^Fio`3V}F8U{}9Kl(lZ*qE`~MX`!ZnY8yL8Udji^@U+SMA zk;toyfc;;ws|nkeDo2t01u0j<^&6{KqgJm#5Zp@zR=-KjTh!b{Q`|7%oHtUxO%n)& zUAF`z)q%1Ce2;omo~O#d8-%2JAwR{OjJC6Em$l|F>6Mtg^;$lk2|N|9CT>%VAEA?5 zXE^1~oZKBW&e=KJC8$~hZ_pgH1`Fu5sZ^V%T|J%Z>Eg*`<9UOC2nc$cMPryq6%~t9 zM5*Y6F*wG^@Waehn{GqDi>>u7YAE%}8q}aH&$^46(AirtIp5dG@B+!@sjlf1DjIYm zW||BtZd4rPVkRw~3mJ}BI)0TYI+IfI1KOQJH5U~9UHrF!yQ&H$_>b_Tk#yyLW!FmTGkn zszdPzGKe%EhEW6>$U$`vLJ1-!IFW`4=K@0o;V#eq`g8-uP9je3rNM_!Hh7$X-SNE< z4u3uZA6w_2gY_9fW`iJ!b3yzKn}u=7;ZTja*h#<(r#Y};@@7q}pbf~pDM38UiT|Zz zF#nhX@M@huov4pOKGHEy;s}%w`&P3iMXVI5L1&<@gsxmj8Pa!!h&kU7}G7g1_%a&#d7>hv_j7S_;S|>UyH5-mH2#yZLIOE|kjTA9o zqn-%z5q2((_*BK9uK`aZkgUBVlK4C!x{+pTBoi~KiV*;~U#?HXb-vLzDjO#Sks8)>(-XHdp!Hr)jn+Y`tYt=c2sNC($aV{vLPML1VEtv+4IQuW|I=<%t7v zGv~!-cYzovaWLO6+P;taGK#3*@%?9rKslnj5U;OL!>M_P8p>cz8cb(;>j_`}ofG%G4>AA3ynIlj!r44gc4lJjF;;q@lmB#$9Hd z)VbwK78WkFa0i5vv!W{(xTWBr80We-$6}JP3mSz|c5hh# diff --git a/agents/tool_maker/__pycache__/chat_manager.cpython-39.pyc b/agents/tool_maker/__pycache__/chat_manager.cpython-39.pyc deleted file mode 100644 index cb43c42a0e1dc4f0455b1ee1e70249aee7353050..0000000000000000000000000000000000000000 GIT binary patch literal 0 HcmV?d00001 literal 5472 zcmcIoTW=f372eq!mn)LGT8>huUG$Qcg{wFhB&h2qaqJ{Dnka7NG&QS$UU8PvQsgc@ zvy>ucnHL#oTEJ}r^dUf;LMrm2pV7auukB0zLE*G^zcb5~C`SqUQWA4^?sI15obP%qUjt!e+D!sO3D;bkPrA-Kj_SL?9`W0cpsdPB$4=o&q{kz{ruMqVO>W)MxV5J@rn$bV)$K2^l2&J#{Z>DS zE?-+gcIrkPci;3QedBp1&1{wH8!CRvpoDe>ZcQ%`gg+W^)SDNu?4S zE=(xATy$tL^eBFmH<6USFbFNxI$DQy^bQU*F}c3SxN%S4VBEaRa0u2)W)5T^hW0DL zMi{xY84-7btkjC51e)UZoL(R$u1?l1wa3g{4-!8sC7UAfxx~5QnFN^?M*Tr@$b^G+ z)iLCPjxBp4@6^l(HQOTYHECh&5`${!BI*1z=CazuVSRF=-#GFsyFa=unzPvbzHxvX z6D3sDO|=OzgS?m}F-Ky71YO;P&hz7ygI+(`9cw<1F2!|x=M*g``<}2QTANNn-OOK+ zN#Kop@WkO8DWe-2>4}k;`xe(z{Zs8T{eY$Vo<1^CqpgKnV(*t!Kj(mb&i=$kCQ4;) 
zq$c&{<~@C6y`x1}O)YVEKVjO)#?0qcA1j$k?LGFX#_iPZRPZc)W)&lD!j?z&2X+K6 zr{@Ljf|gpa3}-Fxlrl`A){y|pr&?e8X=anx)NqE-^dkz5{@L?8#< z>;s3?1G`Q2_aZXcb|F6$*_ zhnY$GG^)jtO|_!ljxsw}Jah2uV=FM=K!{j8Y6VY-LDVevO=f#ysztJga}XMHn66Td z=?_fvF}BERy2%zFoHI|_^Eb{{H?w)1Q6SpDjHWMTC=(xNHgjB=yHK`J=;3!K`pr(% z<{ewguSi4urjf`B4J7#&pv6aQ_)4PfGgt~R;((28l+4sVV1Lj)goQ34P9PcFyrZRh z$GQsK-_}I;_gb5ASkWD)s2!EyZ8k-XP8oIXhL)E8#y-?OX579D>yEsyv}3->9^>|@ zPNh>#t&W@2KE!Ig#GShaBVVT3r#jPlYic4g0s8M6BL{6WsgqiK3M-Y<^8PHZ957y8 z(vYhmH@&2d+~fU>rjj{6lTN`GWPb74ZZvNDzRZ05}3Sy1G>)Nsxs@*50j*==uQjAM3YGz#PHY zp?*BYp`Pi>7aG;R2)4s`Ae-c;kD9rAsS8tU#l8NE&(iD^MQmMvIDBCe^HK=2@p8?d^ogCZ@uGpzUo1yF0PqP>y>%_M$lP`Xb&AIYvjjxo+_QKohOJF6 zl&TLdnh-SLYGd1R$h~!FnQs+S%UziSy=CtPur$Dw3irKd)iOI_#N!=#XoyR6gg=GI zDq$oOG039=?!dqvGCM!)VfogrTloxM5J4FlC<~Q4r$UCJKu;AITf~;9h6XVqwlgfh zF9)p_Sjb46r4^S*JWk?U5cRWyd?H(rt<9k4=iH(I_1p`{;<6>RmSP?SNH+5!;v&sO zIH_ns<{lXk*v+8N!EVG4s8#U=!cavTG6P!%Um8R*b0q>6JcG7v$$mFXGBXAN&fFal zCP6oh0x6!Pkv}ByBZ&HR&Xnp@@f3**B%UVm3`Ay6XxFF}Is@*pNMnAoJdR+;otpWA z1R$A7)+5f-v_B!CSPprOP|WP?*Z4H)DOVsg$JA@!Wiz_@z%gA&+j5!7+y{<&Rvy@P zg;gHfMg_Gd)Bj^zkcfgdqF{wBupjF)`i!Asp*8&8|9~egy3pcBNJQxhlDrN;l#PVby zk1Z`Xs9&I!Xh;$;%*icfM_ zaSaVEBuPX?(_P$$^U$#tnf}n!aYGMvd$>4xBgb}pf;f<=&~oY$p2vQQOZW(U|LCZT zYcPsyP`!*TUHo$c^b(0ktfB~4wRf+NoCH^qI&>98V~GyuC2nEN2bi~tu&$?|xd`s| zU26?QVpNXYRNJqmpu!0DrTyu&OhNnhD?_3fg+VyVAfQQF;w6v-hCscXhw#KDy=XPc zji1i=S)%7nEz&`R$s*5H^i@LQnIx% z3(rH7Y}jod-p}!_{9Yd`@rHTz+N)Q^Ds@*@xhQT>{s{bzO;Wr~4TO6~it@3djd?j( z@8R`SL29v6_+70+wTJwxlkzj8>)VXgbK#LrQA)3x;tUpr$D0u z>_)k8K$)M+DfXY0Rmdd76qr}RhvopS;03lpC}y%6n?qS47NHi?zpOaqC%UR7Jaa7@ z_~+nX1DfHfpYWeElm2tGUW@on@uRmtO3HmcK?a6;WU9oV6Pw-!>4OJx-K4ZfEVJn2 zbWG|4BLw5-7L3_KTRBBd?i^#+0DC_g2Mf6`p3K4uuTtIQblNNs9eiDUYc3kqwSx>excv5^OiOU`U7Ak9-^| zCS0l4=u;BC^fsHB+Z?}7BVTDYZx8%#-XiG5LeQrsafSp1i`-qymy0l%**DY&pWK%z zuH^e)qug=SQ8`751>6KIc(0lzWxLF_he*;_;UqpNcg5X-a0!rTh!Siv)(U0s&~ip*#9x HY|i*E!uwo^ diff --git a/agents/tool_maker/__pycache__/tool_manager.cpython-39.pyc b/agents/tool_maker/__pycache__/tool_manager.cpython-39.pyc deleted file mode 100644 index 60637ec851e5ad2445e3365e8ca04bc5386d476f..0000000000000000000000000000000000000000 GIT binary patch literal 0 HcmV?d00001 literal 1155 zcmZ`&Piqu06i+hQ?R47W!5*w$!lJ^0RJ<;tmI|WKt*|{U3n86k+tr;vO=c+_cTerN zkb*t>mFDWnuOL!=Z>F;yMKk2({mFaz<^3jVVngox|LP=>!N(GxCMb$rb$!kS@Ii+H6j*=z`7>mY45l>Z5T& ziFbqjzHs5OiIh;W7N=IFPE2Aj#C~exowvfJCY+2XMvDv(2$$Jen9Z!opM46%&FExj zsp<4O<5?VLlO*muxeYCSJ&^~CrN8SH{DVv4qNQh1>WbDhQQCD`(XQYOIUe_w*cG_a zkC)@#sGnrUjrPvOAXT9mRq*7_k%==oI8`vP)izKgHn@#P1kgbEPw*dVD(`2VC9<68Bo-k zdm!6$4szvkWx>l#euY%|`O|}nDxd^!Dy(s75<64UHBM~w7mWpvRb+}aZCq$4DY9YP z04KlOpbNKjdZLp*XfoAEDT9xj!BkmD%8YdoTe67!=fTokbeYAU+hRAmt)X9|9d0hW zZq3QexJ)L8A1t=ruAxAPCDoPi!7Mf_f@H~BUFi-)+&vJ)3#eayjIv*xv1`uRPrsdu zCVBrV(TC^cONmX8vIwM%5n>B-B!pWZC)%#Uk#-#g(%cY;)z+`j iq2*IXgH=XLZ`~5y(a0)`l-Km1SLeULSM6|{@BIc5=^c6i diff --git a/agents/tool_maker/chat_manager.py b/agents/tool_maker/chat_manager.py index c3bfb3e..2518dfb 100644 --- a/agents/tool_maker/chat_manager.py +++ b/agents/tool_maker/chat_manager.py @@ -1,8 +1,9 @@ -from openai import OpenAI import importlib +import json import os + +from openai import OpenAI from tool_manager import ToolManager -import json Assistant = type(OpenAI().beta.assistants.list().data[0]) Thread = type(OpenAI().beta.threads.create()) From d316de70a3a90b12354221c37c7b14195bfbcb5f Mon Sep 17 00:00:00 2001 From: RomanGoEmpire <71276897+RomanGoEmpire@users.noreply.github.com> Date: Sun, 12 Nov 2023 15:04:45 +0100 Subject: [PATCH 058/141] reset chat_manager --- 
agents/tool_maker/chat_manager.py | 5 ++--- 1 file changed, 2 insertions(+), 3 deletions(-) diff --git a/agents/tool_maker/chat_manager.py b/agents/tool_maker/chat_manager.py index 2518dfb..c3bfb3e 100644 --- a/agents/tool_maker/chat_manager.py +++ b/agents/tool_maker/chat_manager.py @@ -1,9 +1,8 @@ +from openai import OpenAI import importlib -import json import os - -from openai import OpenAI from tool_manager import ToolManager +import json Assistant = type(OpenAI().beta.assistants.list().data[0]) Thread = type(OpenAI().beta.threads.create()) From 41012b2ba3e4186b129a03689ad9a1970cd1ee66 Mon Sep 17 00:00:00 2001 From: RomanGoEmpire <71276897+RomanGoEmpire@users.noreply.github.com> Date: Sun, 12 Nov 2023 15:04:45 +0100 Subject: [PATCH 059/141] reset chat_manager --- agents/tool_maker/chat_manager.py | 5 ++--- 1 file changed, 2 insertions(+), 3 deletions(-) diff --git a/agents/tool_maker/chat_manager.py b/agents/tool_maker/chat_manager.py index 2518dfb..c3bfb3e 100644 --- a/agents/tool_maker/chat_manager.py +++ b/agents/tool_maker/chat_manager.py @@ -1,9 +1,8 @@ +from openai import OpenAI import importlib -import json import os - -from openai import OpenAI from tool_manager import ToolManager +import json Assistant = type(OpenAI().beta.assistants.list().data[0]) Thread = type(OpenAI().beta.threads.create()) From 86cb39444af73c9a9f34fd4b83fe395602732dee Mon Sep 17 00:00:00 2001 From: Oliver Wiggins-Hay Date: Sun, 12 Nov 2023 14:29:07 +0000 Subject: [PATCH 060/141] Dynamic paths with new file structure Fixing previously static paths that were not functioning with the new file structure --- agents/tool_maker/assistant_manager.py | 8 +++++++- agents/tool_maker/chat_manager.py | 7 ++++++- 2 files changed, 13 insertions(+), 2 deletions(-) diff --git a/agents/tool_maker/assistant_manager.py b/agents/tool_maker/assistant_manager.py index fe13bf2..3f0a3da 100644 --- a/agents/tool_maker/assistant_manager.py +++ b/agents/tool_maker/assistant_manager.py @@ -1,4 +1,6 @@ from tool_manager import ToolManager +from pathlib import Path +import os import json @@ -32,7 +34,11 @@ class AssistantManager: def __init__(self, client): self.client = client self.assistant = None - with open("tool_maker/tool_creator_metadata.json", "r") as file: + Path(__file__).absolute().parent + tools_path = os.path.join( + Path(__file__).absolute().parent, "tool_creator_metadata.json" + ) + with open(tools_path, "r") as file: self.assistant_package = json.load(file) def get_assistant(self): diff --git a/agents/tool_maker/chat_manager.py b/agents/tool_maker/chat_manager.py index 7db0633..39e2c98 100644 --- a/agents/tool_maker/chat_manager.py +++ b/agents/tool_maker/chat_manager.py @@ -1,4 +1,5 @@ import importlib +from pathlib import Path from tool_manager import ToolManager import json import os @@ -11,7 +12,11 @@ class ChatManager: def __init__(self, client: OpenAI): self.client = client - self.functions_path = "tool_maker/python_functions" + Path(__file__).absolute().parent + functions_path = os.path.join( + Path(__file__).absolute().parent, "python_functions" + ) + self.functions_path = functions_path def create_thread_from_user_input(self): return self.client.beta.threads.create( From 5a0f6c9eeced03688d65e37daa880b8c3da4cea7 Mon Sep 17 00:00:00 2001 From: Oliver Wiggins-Hay Date: Sun, 12 Nov 2023 14:29:07 +0000 Subject: [PATCH 061/141] Dynamic paths with new file structure Fixing previously static paths that were not functioning with the new file structure --- agents/tool_maker/assistant_manager.py | 8 +++++++- 
agents/tool_maker/chat_manager.py | 7 ++++++- 2 files changed, 13 insertions(+), 2 deletions(-) diff --git a/agents/tool_maker/assistant_manager.py b/agents/tool_maker/assistant_manager.py index fe13bf2..3f0a3da 100644 --- a/agents/tool_maker/assistant_manager.py +++ b/agents/tool_maker/assistant_manager.py @@ -1,4 +1,6 @@ from tool_manager import ToolManager +from pathlib import Path +import os import json @@ -32,7 +34,11 @@ class AssistantManager: def __init__(self, client): self.client = client self.assistant = None - with open("tool_maker/tool_creator_metadata.json", "r") as file: + Path(__file__).absolute().parent + tools_path = os.path.join( + Path(__file__).absolute().parent, "tool_creator_metadata.json" + ) + with open(tools_path, "r") as file: self.assistant_package = json.load(file) def get_assistant(self): diff --git a/agents/tool_maker/chat_manager.py b/agents/tool_maker/chat_manager.py index 7db0633..39e2c98 100644 --- a/agents/tool_maker/chat_manager.py +++ b/agents/tool_maker/chat_manager.py @@ -1,4 +1,5 @@ import importlib +from pathlib import Path from tool_manager import ToolManager import json import os @@ -11,7 +12,11 @@ class ChatManager: def __init__(self, client: OpenAI): self.client = client - self.functions_path = "tool_maker/python_functions" + Path(__file__).absolute().parent + functions_path = os.path.join( + Path(__file__).absolute().parent, "python_functions" + ) + self.functions_path = functions_path def create_thread_from_user_input(self): return self.client.beta.threads.create( From 8d2d38cfea36875e381f76d5d9596ed630694969 Mon Sep 17 00:00:00 2001 From: alex Date: Sat, 11 Nov 2023 17:56:14 +0000 Subject: [PATCH 062/141] centralise client setup and access of api key --- shared/agent_connector/connect.py | 10 ++-------- shared/openai_config.py | 11 +++++++++++ 2 files changed, 13 insertions(+), 8 deletions(-) create mode 100644 shared/openai_config.py diff --git a/shared/agent_connector/connect.py b/shared/agent_connector/connect.py index e7df655..6d7abcc 100644 --- a/shared/agent_connector/connect.py +++ b/shared/agent_connector/connect.py @@ -1,18 +1,12 @@ import yaml -from openai import OpenAI +from shared.openai_config import get_openai_client import os -import dotenv -dotenv.load_dotenv() import queue as queueModule import time import threading agents_path = 'agents' -api_key = os.getenv('OPENAI_API_KEY') -if api_key is None: - raise ValueError('The OPENAI_API_KEY environment variable is not set.') - -client = OpenAI(api_key=api_key) +client = get_openai_client() # Get the directory name of the current script script_dir = os.path.dirname(os.path.abspath(__file__)) diff --git a/shared/openai_config.py b/shared/openai_config.py new file mode 100644 index 0000000..254ee96 --- /dev/null +++ b/shared/openai_config.py @@ -0,0 +1,11 @@ +import os +import dotenv +from openai import OpenAI + +dotenv.load_dotenv() + +def get_openai_client(): + api_key = os.getenv('OPENAI_API_KEY') + if api_key is None: + raise ValueError('The OPENAI_API_KEY environment variable is not set.') + return OpenAI(api_key=api_key) \ No newline at end of file From 42939a4f2d0c9b333afe311a11a2883b85d9bdbf Mon Sep 17 00:00:00 2001 From: alex Date: Sat, 11 Nov 2023 17:56:14 +0000 Subject: [PATCH 063/141] centralise client setup and access of api key --- shared/agent_connector/connect.py | 10 ++-------- shared/openai_config.py | 11 +++++++++++ 2 files changed, 13 insertions(+), 8 deletions(-) create mode 100644 shared/openai_config.py diff --git a/shared/agent_connector/connect.py 
b/shared/agent_connector/connect.py index e7df655..6d7abcc 100644 --- a/shared/agent_connector/connect.py +++ b/shared/agent_connector/connect.py @@ -1,18 +1,12 @@ import yaml -from openai import OpenAI +from shared.openai_config import get_openai_client import os -import dotenv -dotenv.load_dotenv() import queue as queueModule import time import threading agents_path = 'agents' -api_key = os.getenv('OPENAI_API_KEY') -if api_key is None: - raise ValueError('The OPENAI_API_KEY environment variable is not set.') - -client = OpenAI(api_key=api_key) +client = get_openai_client() # Get the directory name of the current script script_dir = os.path.dirname(os.path.abspath(__file__)) diff --git a/shared/openai_config.py b/shared/openai_config.py new file mode 100644 index 0000000..254ee96 --- /dev/null +++ b/shared/openai_config.py @@ -0,0 +1,11 @@ +import os +import dotenv +from openai import OpenAI + +dotenv.load_dotenv() + +def get_openai_client(): + api_key = os.getenv('OPENAI_API_KEY') + if api_key is None: + raise ValueError('The OPENAI_API_KEY environment variable is not set.') + return OpenAI(api_key=api_key) \ No newline at end of file From 34266b7241354e659f5d24c5f926db7cd4d9aa9d Mon Sep 17 00:00:00 2001 From: alex Date: Sat, 11 Nov 2023 23:03:57 +0000 Subject: [PATCH 064/141] PR Feedback: Use Pydantic to create a central place to load env vars from --- shared/openai_config.py | 11 +++-------- shared/settings.py | 8 ++++++++ 2 files changed, 11 insertions(+), 8 deletions(-) create mode 100644 shared/settings.py diff --git a/shared/openai_config.py b/shared/openai_config.py index 254ee96..810821c 100644 --- a/shared/openai_config.py +++ b/shared/openai_config.py @@ -1,11 +1,6 @@ -import os -import dotenv +from shared.settings import Settings from openai import OpenAI -dotenv.load_dotenv() - def get_openai_client(): - api_key = os.getenv('OPENAI_API_KEY') - if api_key is None: - raise ValueError('The OPENAI_API_KEY environment variable is not set.') - return OpenAI(api_key=api_key) \ No newline at end of file + settings = Settings() + return OpenAI(api_key=settings.OPENAI_API_KEY) diff --git a/shared/settings.py b/shared/settings.py new file mode 100644 index 0000000..2f45d37 --- /dev/null +++ b/shared/settings.py @@ -0,0 +1,8 @@ +from pydantic import BaseSettings + +class Settings(BaseSettings): + OPENAI_API_KEY: str + + class Config: + env_file = '.env' + env_file_encoding = 'utf-8' From 71b5d5ff1d5e0b43d116f4984d9b9c587d1055e0 Mon Sep 17 00:00:00 2001 From: alex Date: Sat, 11 Nov 2023 23:03:57 +0000 Subject: [PATCH 065/141] PR Feedback: Use Pydantic to create a central place to load env vars from --- shared/openai_config.py | 11 +++-------- shared/settings.py | 8 ++++++++ 2 files changed, 11 insertions(+), 8 deletions(-) create mode 100644 shared/settings.py diff --git a/shared/openai_config.py b/shared/openai_config.py index 254ee96..810821c 100644 --- a/shared/openai_config.py +++ b/shared/openai_config.py @@ -1,11 +1,6 @@ -import os -import dotenv +from shared.settings import Settings from openai import OpenAI -dotenv.load_dotenv() - def get_openai_client(): - api_key = os.getenv('OPENAI_API_KEY') - if api_key is None: - raise ValueError('The OPENAI_API_KEY environment variable is not set.') - return OpenAI(api_key=api_key) \ No newline at end of file + settings = Settings() + return OpenAI(api_key=settings.OPENAI_API_KEY) diff --git a/shared/settings.py b/shared/settings.py new file mode 100644 index 0000000..2f45d37 --- /dev/null +++ b/shared/settings.py @@ -0,0 +1,8 @@ +from 
pydantic import BaseSettings + +class Settings(BaseSettings): + OPENAI_API_KEY: str + + class Config: + env_file = '.env' + env_file_encoding = 'utf-8' From 47413c8d511bddc4a7c7991202173d48a8903c50 Mon Sep 17 00:00:00 2001 From: alex Date: Sat, 11 Nov 2023 23:36:00 +0000 Subject: [PATCH 066/141] Use pydantic_settings, as this is current best practice. Update 2 other places we set up OpenAI clients for consistency --- agents/tool_maker/tool_creator.py | 4 ++-- agents/tool_maker/tool_user.py | 4 ++-- shared/settings.py | 2 +- 3 files changed, 5 insertions(+), 5 deletions(-) diff --git a/agents/tool_maker/tool_creator.py b/agents/tool_maker/tool_creator.py index 1295662..4322f00 100644 --- a/agents/tool_maker/tool_creator.py +++ b/agents/tool_maker/tool_creator.py @@ -6,9 +6,9 @@ import os from shared.utils import chat as chat_loop +from shared.openai_config import get_openai_client -from openai import OpenAI -client = OpenAI() # be sure to set your OPENAI_API_KEY environment variable +client = get_openai_client() def create_tool_creator(assistant_details): # create the assistant diff --git a/agents/tool_maker/tool_user.py b/agents/tool_maker/tool_user.py index 9891a87..fd91e91 100644 --- a/agents/tool_maker/tool_user.py +++ b/agents/tool_maker/tool_user.py @@ -6,9 +6,9 @@ import json from shared.utils import chat as chat_loop +from shared.openai_config import get_openai_client -from openai import OpenAI -client = OpenAI() # be sure to set your OPENAI_API_KEY environment variable +client = get_openai_client() def create_tool_user(assistant_details): # create the assistant diff --git a/shared/settings.py b/shared/settings.py index 2f45d37..3047296 100644 --- a/shared/settings.py +++ b/shared/settings.py @@ -1,4 +1,4 @@ -from pydantic import BaseSettings +from pydantic_settings import BaseSettings class Settings(BaseSettings): OPENAI_API_KEY: str From 0e45fe17686709d49e877b4b9d64520ebebe7852 Mon Sep 17 00:00:00 2001 From: alex Date: Sat, 11 Nov 2023 23:36:00 +0000 Subject: [PATCH 067/141] Use pydantic_settings, as this is current best practice. 
Update 2 other places we set up OpenAI clients for consistency --- agents/tool_maker/tool_creator.py | 4 ++-- agents/tool_maker/tool_user.py | 4 ++-- shared/settings.py | 2 +- 3 files changed, 5 insertions(+), 5 deletions(-) diff --git a/agents/tool_maker/tool_creator.py b/agents/tool_maker/tool_creator.py index 1295662..4322f00 100644 --- a/agents/tool_maker/tool_creator.py +++ b/agents/tool_maker/tool_creator.py @@ -6,9 +6,9 @@ import os from shared.utils import chat as chat_loop +from shared.openai_config import get_openai_client -from openai import OpenAI -client = OpenAI() # be sure to set your OPENAI_API_KEY environment variable +client = get_openai_client() def create_tool_creator(assistant_details): # create the assistant diff --git a/agents/tool_maker/tool_user.py b/agents/tool_maker/tool_user.py index 9891a87..fd91e91 100644 --- a/agents/tool_maker/tool_user.py +++ b/agents/tool_maker/tool_user.py @@ -6,9 +6,9 @@ import json from shared.utils import chat as chat_loop +from shared.openai_config import get_openai_client -from openai import OpenAI -client = OpenAI() # be sure to set your OPENAI_API_KEY environment variable +client = get_openai_client() def create_tool_user(assistant_details): # create the assistant diff --git a/shared/settings.py b/shared/settings.py index 2f45d37..3047296 100644 --- a/shared/settings.py +++ b/shared/settings.py @@ -1,4 +1,4 @@ -from pydantic import BaseSettings +from pydantic_settings import BaseSettings class Settings(BaseSettings): OPENAI_API_KEY: str From 440ad45e6c3fdc409bbe49c013d555d5a17e0afe Mon Sep 17 00:00:00 2001 From: alex Date: Sun, 12 Nov 2023 15:38:52 +0000 Subject: [PATCH 068/141] Handle merge conflicts + introduce get_openai_client in more places --- agents/agent_builder/create.py | 10 ++-------- agents/tool_maker/assistant_manager.py | 8 +++----- agents/tool_maker/unit_manager.py | 9 ++------- 3 files changed, 7 insertions(+), 20 deletions(-) diff --git a/agents/agent_builder/create.py b/agents/agent_builder/create.py index e94b9f9..14f350c 100644 --- a/agents/agent_builder/create.py +++ b/agents/agent_builder/create.py @@ -1,15 +1,9 @@ -from openai import OpenAI import os import json -import dotenv -dotenv.load_dotenv() +from shared.openai_config import get_openai_client agents_path = 'agents' -api_key = os.getenv('OPENAI_API_KEY') -if api_key is None: - raise ValueError('The OPENAI_API_KEY environment variable is not set.') - -client = OpenAI(api_key=api_key) +client = get_openai_client() # Check if the 'agents' folder is empty or doesn't exist if not os.path.exists(agents_path) or not os.path.isdir(agents_path) or not os.listdir(agents_path): diff --git a/agents/tool_maker/assistant_manager.py b/agents/tool_maker/assistant_manager.py index 3f0a3da..a7d7544 100644 --- a/agents/tool_maker/assistant_manager.py +++ b/agents/tool_maker/assistant_manager.py @@ -101,11 +101,9 @@ def make_coding_assistant(self): if __name__ == "__main__": - from openai import OpenAI - import os - - apikey = os.getenv("OPENAI_API_KEY") - client = OpenAI(api_key=apikey) + from shared.openai_config import get_openai_client + client = get_openai_client() + assistant_manager = AssistantManager(client=client) assistant = assistant_manager.get_assistant() print(assistant) diff --git a/agents/tool_maker/unit_manager.py b/agents/tool_maker/unit_manager.py index 290b1fd..231f118 100644 --- a/agents/tool_maker/unit_manager.py +++ b/agents/tool_maker/unit_manager.py @@ -27,13 +27,8 @@ def chat(self): if __name__ == "__main__": - import os - from openai import 
OpenAI - - api_key = os.getenv("OPENAI_API_KEY") - if api_key is None: - raise ValueError("The OPENAI_API_KEY environment variable is not set.") - client = OpenAI(api_key=api_key) + from shared.openai_config import get_openai_client + client = get_openai_client() unit = Unit(client=client) unit.chat() From ee86fdca75e05660a3f84862c2aa4499af7de793 Mon Sep 17 00:00:00 2001 From: alex Date: Sun, 12 Nov 2023 15:38:52 +0000 Subject: [PATCH 069/141] Handle merge conflicts + introduce get_openai_client in more places --- agents/agent_builder/create.py | 10 ++-------- agents/tool_maker/assistant_manager.py | 8 +++----- agents/tool_maker/unit_manager.py | 9 ++------- 3 files changed, 7 insertions(+), 20 deletions(-) diff --git a/agents/agent_builder/create.py b/agents/agent_builder/create.py index e94b9f9..14f350c 100644 --- a/agents/agent_builder/create.py +++ b/agents/agent_builder/create.py @@ -1,15 +1,9 @@ -from openai import OpenAI import os import json -import dotenv -dotenv.load_dotenv() +from shared.openai_config import get_openai_client agents_path = 'agents' -api_key = os.getenv('OPENAI_API_KEY') -if api_key is None: - raise ValueError('The OPENAI_API_KEY environment variable is not set.') - -client = OpenAI(api_key=api_key) +client = get_openai_client() # Check if the 'agents' folder is empty or doesn't exist if not os.path.exists(agents_path) or not os.path.isdir(agents_path) or not os.listdir(agents_path): diff --git a/agents/tool_maker/assistant_manager.py b/agents/tool_maker/assistant_manager.py index 3f0a3da..a7d7544 100644 --- a/agents/tool_maker/assistant_manager.py +++ b/agents/tool_maker/assistant_manager.py @@ -101,11 +101,9 @@ def make_coding_assistant(self): if __name__ == "__main__": - from openai import OpenAI - import os - - apikey = os.getenv("OPENAI_API_KEY") - client = OpenAI(api_key=apikey) + from shared.openai_config import get_openai_client + client = get_openai_client() + assistant_manager = AssistantManager(client=client) assistant = assistant_manager.get_assistant() print(assistant) diff --git a/agents/tool_maker/unit_manager.py b/agents/tool_maker/unit_manager.py index 290b1fd..231f118 100644 --- a/agents/tool_maker/unit_manager.py +++ b/agents/tool_maker/unit_manager.py @@ -27,13 +27,8 @@ def chat(self): if __name__ == "__main__": - import os - from openai import OpenAI - - api_key = os.getenv("OPENAI_API_KEY") - if api_key is None: - raise ValueError("The OPENAI_API_KEY environment variable is not set.") - client = OpenAI(api_key=api_key) + from shared.openai_config import get_openai_client + client = get_openai_client() unit = Unit(client=client) unit.chat() From c2bf239c950d2f9a8fd0c006713efd8ea906c706 Mon Sep 17 00:00:00 2001 From: Tomasz Kasperczyk Date: Sun, 12 Nov 2023 16:55:04 +0100 Subject: [PATCH 070/141] Add 'gpts' folder with .md files for GPT definitions --- gpts/HAAS Assistant.md | 109 +++++++++++++++++++++++++++++++++++++++++ gpts/README.md | 46 +++++++++++++++++ 2 files changed, 155 insertions(+) create mode 100644 gpts/HAAS Assistant.md create mode 100644 gpts/README.md diff --git a/gpts/HAAS Assistant.md b/gpts/HAAS Assistant.md new file mode 100644 index 0000000..051c5c1 --- /dev/null +++ b/gpts/HAAS Assistant.md @@ -0,0 +1,109 @@ +## Name: +`HAAS Assistant` + +## Description: +`An interactive assistant for the Hierarchical Autonomous Agent Swarm` + +## Instructions: +``` +As HAAS Guide, your primary function is to serve as an interactive assistant for the HAAS repository (OpenAI_Agent_Swarm), specializing in providing up-to-date information 
and guidance. Your capabilities are centered around the GitHub GraphQL API, which you use to fetch and present information about issues, pull requests, commits, discussions and repository files upon user requests. It is absolutely crucial to limit the queries to this repository alone: +owner: daveshap +name: OpenAI_Agent_Swarm + +Your key responsibilities include: + +1. Summarizing open and closed issues in the repository, providing users with clear, concise overviews. +2. Keeping users informed about recent commits and pull requests, detailing the nature and impact of these updates. +3. Displaying contents from specific files within the repository, with a particular focus on the README file to aid newcomers. +4. Guiding users through the repository's content, helping them locate specific items or understand its overall structure. +5. Responding to queries about the repository's status, ongoing work, resolved issues, and upcoming features or fixes. +6. Encouraging and moderating discussions related to the repository, using relevant topics, questions, or updates as conversation starters. + +In your interactions, you adhere to the following guidelines: + +- Provide factual information based on repository data, avoiding personal opinions or biases. +- Avoid handling sensitive personal data of users. +- Use clear, concise language, maintaining a user-friendly approach. +- Maintain a respectful, professional tone in all interactions. +- Gracefully handle errors or misunderstandings, offering clarification or additional assistance as needed. +- Continuously improve by learning from user interactions, feedback, and updates to the repository. +- Adhere to privacy and security best practices, ensuring secure handling of user data and interactions. + +When responding, focus on delivering concise and relevant answers. For instance, if asked about contributing to the repository, fetch open issues using the API, analyze them, and suggest suitable contribution opportunities. If identifying a file needing improvement, review files using the API and recommend the one most in need of enhancement. Base all responses on real-time, accurate data from the API, avoiding generic statements not grounded in the retrieved information. If the user asks about your thoughts on the given subject, analyze the relevant resource and instead of simply describing it, respond with what you think about it. Your aim is to provide clear, final responses, focusing on clarity and brevity, and tailoring your answers to the user's specific requests. + +When interacting with the API, you will have to wrap each request in a JSON payload, for example: + +`curl -H "Authorization: bearer " -X POST -d "{\"query\":\"query { repository(owner: \\\"daveshap\\\", name: \\\"OpenAI_Agent_Swarm\\\") { ref(qualifiedName: \\\"main\\\") { target { ... on Commit { history(first: 10) { edges { node { oid, message, author { name, email } } } } } } } } }\"}" https://api.github.com/graphql` + +You are strictly forbidden from interacting with any other GitHub repositories. +``` + +## Action 1 - GraphQL: + +### Definition: + +```JSON +{ + "openapi": "3.1.0", + "info": { + "title": "OpenAI_Agent_Swarm GitHub repo GraphQL API", + "description": "Interact with the OpenAI_Agent_Swarm GitHub repo using GraphQL. 
Sample query: curl -X POST -H 'Content-Type: application/json' -d '{\"query\": \"query { repository(owner: \\\"daveshap\\\", name: \\\"OpenAI_Agent_Swarm\\\") { issues(first: 10, states: OPEN) { edges { node { title url createdAt } } } } }\"}' https://api.github.com/graphql", + "version": "v1.0.0" + }, + "servers": [ + { + "url": "https://api.github.com" + } + ], + "paths": { + "/graphql": { + "post": { + "description": "GraphQL endpoint for the OpenAI_Agent_Swarm GitHub repo.", + "operationId": "graphqlEndpoint_v3", + "requestBody": { + "description": "GraphQL query in a stringified JSON format", + "required": true, + "content": { + "application/json": { + "schema": { + "type": "object", + "properties": { + "query": { + "type": "string" + } + }, + "required": [ + "query" + ] + } + } + } + } + } + } + }, + "components": { + "schemas": { + "Error": { + "type": "object", + "properties": { + "message": { + "type": "string" + } + } + } + } + } +} +``` + +### Privacy Policy: +``` +https://docs.github.com/en/site-policy/privacy-policies/github-privacy-statement +``` + +### Authorization: +``` +Type: API Key, Bearer +Key: +``` \ No newline at end of file diff --git a/gpts/README.md b/gpts/README.md new file mode 100644 index 0000000..7987d10 --- /dev/null +++ b/gpts/README.md @@ -0,0 +1,46 @@ +# GPT Definitions + +Contributions to the definitions of these GPTs are welcome through pull requests. The maintainer of each GPT is responsible for syncing these changes with the configuration inside ChatGPT. + +## HAAS Board Concierge +A GPT tailored to assist users in navigating the discussion board, answering queries, and highlighting trending topics. + +### Features +- Assisting with board navigation. +- Answering user queries. +- Identifying and summarizing trending topics. + +### Maintainer +[@CunninghamTim](https://github.com/CunninghamTim) + +### Definition +Unknown + +## HAAS Assistant +A virtual assistant for our project, enhancing interactions by navigating and analyzing the codebase. + +### Features +- Code repository navigation and analysis. +- Project roadmap assistance. +- File-specific collaboration support. +- Incorporates all functionalities of the HAAS Board Concierge, including discussion board assistance and query handling. + +### Maintainer +[@TKAsperczyk](https://github.com/TKasperczyk/) + +### Definition +[HAAS Assistant.md](https://github.com/daveshap/OpenAI_Agent_Swarm/blob/main/gpts/HAAS%20Assistant.md) + +## Puppet Master (Future Project) +A future GPT designed to instantiate and manage the swarm. + +### Planned Features +- Secure API key integration. +- OpenAI API interaction. +- Communication with instantiated agents. 
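For contributors who want to exercise the same GraphQL action outside of ChatGPT, the call is easy to reproduce in Python. The snippet below is a minimal sketch rather than code from this repository: it assumes a personal access token in a `GITHUB_TOKEN` environment variable and reuses the open-issues query quoted in the action description above.

```python
import os
import requests

# Hypothetical helper mirroring the GraphQL action defined above.
# GITHUB_TOKEN is assumed to hold a GitHub personal access token.
OPEN_ISSUES_QUERY = """
query {
  repository(owner: "daveshap", name: "OpenAI_Agent_Swarm") {
    issues(first: 10, states: OPEN) {
      edges { node { title url createdAt } }
    }
  }
}
"""

def fetch_open_issues() -> list:
    response = requests.post(
        "https://api.github.com/graphql",
        json={"query": OPEN_ISSUES_QUERY},
        headers={"Authorization": f"bearer {os.environ['GITHUB_TOKEN']}"},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["data"]["repository"]["issues"]["edges"]

if __name__ == "__main__":
    for edge in fetch_open_issues():
        print(edge["node"]["title"], "-", edge["node"]["url"])
```

The GPT action wraps the same request; the only difference is that ChatGPT injects the bearer token configured in the action's authorization settings.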
+ +### Maintainer +TBD + +### Definition +TBD \ No newline at end of file From 34b80f8cd154225ed0656c8eae29b22f3ad4293a Mon Sep 17 00:00:00 2001 From: Tomasz Kasperczyk Date: Sun, 12 Nov 2023 16:55:04 +0100 Subject: [PATCH 071/141] Add 'gpts' folder with .md files for GPT definitions --- gpts/HAAS Assistant.md | 109 +++++++++++++++++++++++++++++++++++++++++ gpts/README.md | 46 +++++++++++++++++ 2 files changed, 155 insertions(+) create mode 100644 gpts/HAAS Assistant.md create mode 100644 gpts/README.md diff --git a/gpts/HAAS Assistant.md b/gpts/HAAS Assistant.md new file mode 100644 index 0000000..051c5c1 --- /dev/null +++ b/gpts/HAAS Assistant.md @@ -0,0 +1,109 @@ +## Name: +`HAAS Assistant` + +## Description: +`An interactive assistant for the Hierarchical Autonomous Agent Swarm` + +## Instructions: +``` +As HAAS Guide, your primary function is to serve as an interactive assistant for the HAAS repository (OpenAI_Agent_Swarm), specializing in providing up-to-date information and guidance. Your capabilities are centered around the GitHub GraphQL API, which you use to fetch and present information about issues, pull requests, commits, discussions and repository files upon user requests. It is absolutely crucial to limit the queries to this repository alone: +owner: daveshap +name: OpenAI_Agent_Swarm + +Your key responsibilities include: + +1. Summarizing open and closed issues in the repository, providing users with clear, concise overviews. +2. Keeping users informed about recent commits and pull requests, detailing the nature and impact of these updates. +3. Displaying contents from specific files within the repository, with a particular focus on the README file to aid newcomers. +4. Guiding users through the repository's content, helping them locate specific items or understand its overall structure. +5. Responding to queries about the repository's status, ongoing work, resolved issues, and upcoming features or fixes. +6. Encouraging and moderating discussions related to the repository, using relevant topics, questions, or updates as conversation starters. + +In your interactions, you adhere to the following guidelines: + +- Provide factual information based on repository data, avoiding personal opinions or biases. +- Avoid handling sensitive personal data of users. +- Use clear, concise language, maintaining a user-friendly approach. +- Maintain a respectful, professional tone in all interactions. +- Gracefully handle errors or misunderstandings, offering clarification or additional assistance as needed. +- Continuously improve by learning from user interactions, feedback, and updates to the repository. +- Adhere to privacy and security best practices, ensuring secure handling of user data and interactions. + +When responding, focus on delivering concise and relevant answers. For instance, if asked about contributing to the repository, fetch open issues using the API, analyze them, and suggest suitable contribution opportunities. If identifying a file needing improvement, review files using the API and recommend the one most in need of enhancement. Base all responses on real-time, accurate data from the API, avoiding generic statements not grounded in the retrieved information. If the user asks about your thoughts on the given subject, analyze the relevant resource and instead of simply describing it, respond with what you think about it. Your aim is to provide clear, final responses, focusing on clarity and brevity, and tailoring your answers to the user's specific requests. 
+ +When interacting with the API, you will have to wrap each request in a JSON payload, for example: + +`curl -H "Authorization: bearer " -X POST -d "{\"query\":\"query { repository(owner: \\\"daveshap\\\", name: \\\"OpenAI_Agent_Swarm\\\") { ref(qualifiedName: \\\"main\\\") { target { ... on Commit { history(first: 10) { edges { node { oid, message, author { name, email } } } } } } } } }\"}" https://api.github.com/graphql` + +You are strictly forbidden from interacting with any other GitHub repositories. +``` + +## Action 1 - GraphQL: + +### Definition: + +```JSON +{ + "openapi": "3.1.0", + "info": { + "title": "OpenAI_Agent_Swarm GitHub repo GraphQL API", + "description": "Interact with the OpenAI_Agent_Swarm GitHub repo using GraphQL. Sample query: curl -X POST -H 'Content-Type: application/json' -d '{\"query\": \"query { repository(owner: \\\"daveshap\\\", name: \\\"OpenAI_Agent_Swarm\\\") { issues(first: 10, states: OPEN) { edges { node { title url createdAt } } } } }\"}' https://api.github.com/graphql", + "version": "v1.0.0" + }, + "servers": [ + { + "url": "https://api.github.com" + } + ], + "paths": { + "/graphql": { + "post": { + "description": "GraphQL endpoint for the OpenAI_Agent_Swarm GitHub repo.", + "operationId": "graphqlEndpoint_v3", + "requestBody": { + "description": "GraphQL query in a stringified JSON format", + "required": true, + "content": { + "application/json": { + "schema": { + "type": "object", + "properties": { + "query": { + "type": "string" + } + }, + "required": [ + "query" + ] + } + } + } + } + } + } + }, + "components": { + "schemas": { + "Error": { + "type": "object", + "properties": { + "message": { + "type": "string" + } + } + } + } + } +} +``` + +### Privacy Policy: +``` +https://docs.github.com/en/site-policy/privacy-policies/github-privacy-statement +``` + +### Authorization: +``` +Type: API Key, Bearer +Key: +``` \ No newline at end of file diff --git a/gpts/README.md b/gpts/README.md new file mode 100644 index 0000000..7987d10 --- /dev/null +++ b/gpts/README.md @@ -0,0 +1,46 @@ +# GPT Definitions + +Contributions to the definitions of these GPTs are welcome through pull requests. The maintainer of each GPT is responsible for syncing these changes with the configuration inside ChatGPT. + +## HAAS Board Concierge +A GPT tailored to assist users in navigating the discussion board, answering queries, and highlighting trending topics. + +### Features +- Assisting with board navigation. +- Answering user queries. +- Identifying and summarizing trending topics. + +### Maintainer +[@CunninghamTim](https://github.com/CunninghamTim) + +### Definition +Unknown + +## HAAS Assistant +A virtual assistant for our project, enhancing interactions by navigating and analyzing the codebase. + +### Features +- Code repository navigation and analysis. +- Project roadmap assistance. +- File-specific collaboration support. +- Incorporates all functionalities of the HAAS Board Concierge, including discussion board assistance and query handling. + +### Maintainer +[@TKAsperczyk](https://github.com/TKasperczyk/) + +### Definition +[HAAS Assistant.md](https://github.com/daveshap/OpenAI_Agent_Swarm/blob/main/gpts/HAAS%20Assistant.md) + +## Puppet Master (Future Project) +A future GPT designed to instantiate and manage the swarm. + +### Planned Features +- Secure API key integration. +- OpenAI API interaction. +- Communication with instantiated agents. 
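If the Puppet Master (or any other agent) ends up driving the swarm through the Assistants API rather than a custom GPT, the same GraphQL access could be exposed to it as a function tool. The definition below is an illustrative sketch, not something shipped in this repository; the function name, description, and parameters are assumptions.

```python
# Hypothetical tool definition for an Assistants API agent; the name and
# parameters are illustrative, not part of the repository.
github_graphql_tool = {
    "type": "function",
    "function": {
        "name": "query_repository",
        "description": (
            "Run a GraphQL query against the daveshap/OpenAI_Agent_Swarm "
            "repository and return the raw JSON response."
        ),
        "parameters": {
            "type": "object",
            "properties": {
                "query": {
                    "type": "string",
                    "description": "GraphQL query scoped to this repository only.",
                }
            },
            "required": ["query"],
        },
    },
}
```

An agent given this tool would emit a `query_repository` call, and the framework would forward the query in the same way as the curl example above.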
+ +### Maintainer +TBD + +### Definition +TBD \ No newline at end of file From 81789ec164d49a6d771b2d54bf1f21f1b8ebf002 Mon Sep 17 00:00:00 2001 From: David Shapiro Date: Sun, 12 Nov 2023 11:04:15 -0500 Subject: [PATCH 072/141] Update README.md --- README.md | 22 +++++++++++++++++++++- 1 file changed, 21 insertions(+), 1 deletion(-) diff --git a/README.md b/README.md index 5499b1c..8118799 100644 --- a/README.md +++ b/README.md @@ -7,7 +7,27 @@ We have our first GPT Concierge. You can chat with this custom ChatGPT to figure - **HAAS Board Concierge:** [https://chat.openai.com/g/g-MIssTuE2b-haas-board-concierge](https://chat.openai.com/g/g-MIssTuE2b-haas-board-concierge) - **HAAS Assistant:** [https://chat.openai.com/g/g-lIAp9qowx-haas-assistant](https://chat.openai.com/g/g-lIAp9qowx-haas-assistant) (Similar function as above but markedly faster) -## Overview +We'll have a public Discord soon. + +# Project Principles + +## Move Fast, Break Stuff + +This is first and foremost a high velocity hacking group. + +## Cutting Edge Only + +Exclusively use cutting edge stuff, like OpenAI's latest Agents endpoint. For exclusively Open Source, go check out the ACE Framework: https://github.com/daveshap/ACE_Framework + +## Full Autonomy + +Fully autonomous swarms are the goal. That means a human does not need to be in the loop telling it what to do, supervising, or anything. Characteristics of a fully autonomous swarm: + +1. **Principle or Mission Driven:** Once instantiated, the swarm pursues its mission or goals without supervision. It may self-direct based on principles such as the heuristic imperatives. This is the "self-directed" maxim. +2. **Self-Correcting:** The swarm must detect and correct technical, strategic, epistemic, and other errors and then correct them. +3. **Self-Improving:** Eventually, the swarm should enhance its own fundamental capabilities over time. + +# Overview The Hierarchical Autonomous Agent Swarm (HAAS) is a groundbreaking initiative that leverages OpenAI's latest advancements in agent-based APIs to create a self-organizing and ethically governed ecosystem of AI agents. Drawing inspiration from the ACE Framework, HAAS introduces a novel approach to AI governance and operation, where a hierarchy of specialized agents, each with distinct roles and capabilities, collaborate to solve complex problems and perform a wide array of tasks. From a9b2d3760e1d59172003be84868631837156e0d0 Mon Sep 17 00:00:00 2001 From: David Shapiro Date: Sun, 12 Nov 2023 11:04:15 -0500 Subject: [PATCH 073/141] Update README.md --- README.md | 22 +++++++++++++++++++++- 1 file changed, 21 insertions(+), 1 deletion(-) diff --git a/README.md b/README.md index 5499b1c..8118799 100644 --- a/README.md +++ b/README.md @@ -7,7 +7,27 @@ We have our first GPT Concierge. You can chat with this custom ChatGPT to figure - **HAAS Board Concierge:** [https://chat.openai.com/g/g-MIssTuE2b-haas-board-concierge](https://chat.openai.com/g/g-MIssTuE2b-haas-board-concierge) - **HAAS Assistant:** [https://chat.openai.com/g/g-lIAp9qowx-haas-assistant](https://chat.openai.com/g/g-lIAp9qowx-haas-assistant) (Similar function as above but markedly faster) -## Overview +We'll have a public Discord soon. + +# Project Principles + +## Move Fast, Break Stuff + +This is first and foremost a high velocity hacking group. + +## Cutting Edge Only + +Exclusively use cutting edge stuff, like OpenAI's latest Agents endpoint. 
For exclusively Open Source, go check out the ACE Framework: https://github.com/daveshap/ACE_Framework + +## Full Autonomy + +Fully autonomous swarms are the goal. That means a human does not need to be in the loop telling it what to do, supervising, or anything. Characteristics of a fully autonomous swarm: + +1. **Principle or Mission Driven:** Once instantiated, the swarm pursues its mission or goals without supervision. It may self-direct based on principles such as the heuristic imperatives. This is the "self-directed" maxim. +2. **Self-Correcting:** The swarm must detect and correct technical, strategic, epistemic, and other errors and then correct them. +3. **Self-Improving:** Eventually, the swarm should enhance its own fundamental capabilities over time. + +# Overview The Hierarchical Autonomous Agent Swarm (HAAS) is a groundbreaking initiative that leverages OpenAI's latest advancements in agent-based APIs to create a self-organizing and ethically governed ecosystem of AI agents. Drawing inspiration from the ACE Framework, HAAS introduces a novel approach to AI governance and operation, where a hierarchy of specialized agents, each with distinct roles and capabilities, collaborate to solve complex problems and perform a wide array of tasks. From d7775bb7e5b8a4d8d30d7e60be9b3092b025c3cb Mon Sep 17 00:00:00 2001 From: David Shapiro Date: Sun, 12 Nov 2023 11:33:43 -0500 Subject: [PATCH 074/141] Update README.md --- README.md | 6 +++++- 1 file changed, 5 insertions(+), 1 deletion(-) diff --git a/README.md b/README.md index 8118799..0a80005 100644 --- a/README.md +++ b/README.md @@ -7,7 +7,11 @@ We have our first GPT Concierge. You can chat with this custom ChatGPT to figure - **HAAS Board Concierge:** [https://chat.openai.com/g/g-MIssTuE2b-haas-board-concierge](https://chat.openai.com/g/g-MIssTuE2b-haas-board-concierge) - **HAAS Assistant:** [https://chat.openai.com/g/g-lIAp9qowx-haas-assistant](https://chat.openai.com/g/g-lIAp9qowx-haas-assistant) (Similar function as above but markedly faster) -We'll have a public Discord soon. +## Public Discord + +The Autonomous AI Lab discord for the ACE Framework and HAAS Project is now open: https://discord.gg/mJKUYNm8qY + +> !!!! IMPORTANT NOTE: This repo is still the single source of truth! If it's not on this repo, it doesn't exist! Discord is merely for convenience. # Project Principles From 2d6f67eb328b2f09bb79ad5f72545def8bcebbf6 Mon Sep 17 00:00:00 2001 From: David Shapiro Date: Sun, 12 Nov 2023 11:33:43 -0500 Subject: [PATCH 075/141] Update README.md --- README.md | 6 +++++- 1 file changed, 5 insertions(+), 1 deletion(-) diff --git a/README.md b/README.md index 8118799..0a80005 100644 --- a/README.md +++ b/README.md @@ -7,7 +7,11 @@ We have our first GPT Concierge. You can chat with this custom ChatGPT to figure - **HAAS Board Concierge:** [https://chat.openai.com/g/g-MIssTuE2b-haas-board-concierge](https://chat.openai.com/g/g-MIssTuE2b-haas-board-concierge) - **HAAS Assistant:** [https://chat.openai.com/g/g-lIAp9qowx-haas-assistant](https://chat.openai.com/g/g-lIAp9qowx-haas-assistant) (Similar function as above but markedly faster) -We'll have a public Discord soon. +## Public Discord + +The Autonomous AI Lab discord for the ACE Framework and HAAS Project is now open: https://discord.gg/mJKUYNm8qY + +> !!!! IMPORTANT NOTE: This repo is still the single source of truth! If it's not on this repo, it doesn't exist! Discord is merely for convenience. 
# Project Principles From 6c05a8727a90c33fc88ab241a8e9263d0e11f83d Mon Sep 17 00:00:00 2001 From: Tomasz Kasperczyk Date: Sun, 12 Nov 2023 17:42:48 +0100 Subject: [PATCH 076/141] Fixed the directory structure - moved `gpts` into `agents` --- {gpts => agents/gpts}/HAAS Assistant.md | 0 {gpts => agents/gpts}/README.md | 0 2 files changed, 0 insertions(+), 0 deletions(-) rename {gpts => agents/gpts}/HAAS Assistant.md (100%) rename {gpts => agents/gpts}/README.md (100%) diff --git a/gpts/HAAS Assistant.md b/agents/gpts/HAAS Assistant.md similarity index 100% rename from gpts/HAAS Assistant.md rename to agents/gpts/HAAS Assistant.md diff --git a/gpts/README.md b/agents/gpts/README.md similarity index 100% rename from gpts/README.md rename to agents/gpts/README.md From 37da791a338e6c8e29a720ef1a43747c314fe0da Mon Sep 17 00:00:00 2001 From: Tomasz Kasperczyk Date: Sun, 12 Nov 2023 17:42:48 +0100 Subject: [PATCH 077/141] Fixed the directory structure - moved `gpts` into `agents` --- {gpts => agents/gpts}/HAAS Assistant.md | 0 {gpts => agents/gpts}/README.md | 0 2 files changed, 0 insertions(+), 0 deletions(-) rename {gpts => agents/gpts}/HAAS Assistant.md (100%) rename {gpts => agents/gpts}/README.md (100%) diff --git a/gpts/HAAS Assistant.md b/agents/gpts/HAAS Assistant.md similarity index 100% rename from gpts/HAAS Assistant.md rename to agents/gpts/HAAS Assistant.md diff --git a/gpts/README.md b/agents/gpts/README.md similarity index 100% rename from gpts/README.md rename to agents/gpts/README.md From 650a585f7e7a986501f9e5cf994d42a38781a886 Mon Sep 17 00:00:00 2001 From: RomanGoEmpire <71276897+RomanGoEmpire@users.noreply.github.com> Date: Sun, 12 Nov 2023 21:32:39 +0100 Subject: [PATCH 078/141] First prototype for github extension --- .../github_api_wrapper.py | 388 ++++++++++++++++++ .../githup_api_test.ipynb | 322 +++++++++++++++ 2 files changed, 710 insertions(+) create mode 100644 shared/github_communication/github_api_wrapper.py create mode 100644 shared/github_communication/githup_api_test.ipynb diff --git a/shared/github_communication/github_api_wrapper.py b/shared/github_communication/github_api_wrapper.py new file mode 100644 index 0000000..afe230e --- /dev/null +++ b/shared/github_communication/github_api_wrapper.py @@ -0,0 +1,388 @@ +import requests +from github import Auth, Github + + +class GithubAPIWrapper: + def __init__(self, api_token: str, repository_name: str) -> None: + # personal access token + self.api_token = api_token + # format: "user_name/repository_name" + self.repo_name = repository_name + self.repository = self.initialize_repository() + + def initialize_repository(self): + """ + Initialize Github repository. + + Returns: + Github repository. + """ + auth = Auth.Token(self.api_token) + return Github(auth=auth).get_repo(self.repo_name) + + def get_file_paths(self, branch_name: str) -> list[str]: + """ + Get all file names and contents from repository. + The path to the file is relative to the repository root. + + Args: + branch_name: Name of the branch. + + Returns: + List of file paths. 
+ """ + + files = [] + contents = self.repository.get_contents("", ref=branch_name) + + for file_content in contents: + # if file is a directory, get the contents of this directory and add them to the list + if file_content.type == "dir": + contents.extend( + self.repository.get_contents(file_content.path, ref=branch_name) + ) + else: + files += [file_content.path] + return files + + def get_file_content(self, file_path: str, branch_name: str) -> str: + """ + Get file content from repository. + + Args: + file_path: Path to file. + branch_name: Name of the branch. + + Returns: + File content. + """ + + if file_path not in self.get_file_paths(branch_name): + raise FileNotFoundError( + f"File {file_path} not found in repository. Please check the file path." + ) + file = self.repository.get_contents(file_path, ref=branch_name) + content = file.decoded_content.decode("utf-8") + return content + + def create_file( + self, file_path: str, file_content: str, commit_message: str, branch_name: str + ) -> requests.Response: + """ + Create file in repository. + + Args: + file_path: Path to file. + file_content: Content of file. + commit_message: Commit message. + branch_name: Name of the branch. + """ + + response = self.repository.create_file( + file_path, commit_message, file_content, branch=branch_name + ) + return response + + def update_file( + self, file_path: str, file_content: str, commit_message: str, branch_name: str + ) -> requests.Response: + """ + Update file in repository. + + Args: + file_path: Path to file. + file_content: Content of file. + commit_message: Commit message. + branch_name: Name of the branch. + """ + + if file_path not in self.get_file_paths(branch_name): + raise FileNotFoundError( + f"File {file_path} not found in repository. Please check the file path." + ) + + file = self.repository.get_contents(file_path, ref=branch_name) + response = self.repository.update_file( + file_path, commit_message, file_content, file.sha, branch=branch_name + ) + return response + + def delete_file( + self, file_path: str, commit_message: str, branch_name: str + ) -> requests.Response: + """ + Delete file in repository. + + Args: + file_path: Path to file. + commit_message: Commit message. + branch_name: Name of the branch. + """ + + if file_path not in self.get_file_paths(branch_name): + raise FileNotFoundError( + f"File {file_path} not found in repository. Please check the file path." + ) + + file = self.repository.get_contents(file_path, ref=branch_name) + response = self.repository.delete_file( + file_path, commit_message, file.sha, branch=branch_name + ) + return response + + def get_branches(self) -> list[str]: + """ + Get all branches from repository. + + Returns: + List of branches. + """ + + branches = self.repository.get_branches() + branches = [branch.name for branch in branches] + return branches + + def create_branch( + self, branch_name: str, from_branch: str = "main" + ) -> requests.Response: + """ + Create branch in repository. + + Args: + branch_name: Name of branch. + from_branch: Name of branch to branch from. Default is "main". + """ + + brances = self.get_branches() + for branch in brances: + if branch.name == branch_name: + raise ValueError( + f"Branch {branch_name} already exists. Please choose another branch name." 
+ ) + + response = self.repository.create_git_ref( + ref=f"refs/heads/{branch_name}", + sha=self.repository.get_branch(from_branch).commit.sha, + ) + return response + + def delete_branch(self, branch_name: str) -> requests.Response: + """ + Delete branch in repository. + + Args: + branch_name: Name of branch. + """ + + if branch_name not in [branch.name for branch in self.get_branches()]: + raise ValueError( + f"Branch {branch_name} does not exist. Please choose another branch name." + ) + + response = self.repository.get_git_ref(f"heads/{branch_name}").delete() + return response + + def get_pull_requests(self, state: str = "open") -> list[requests.Response]: + """ + Get all pull requests from repository. + + Args: + state: State of pull requests. Default is "open". + + Returns: + List of numbers of pull requests. + """ + pull_requests = self.repository.get_pulls(state=state) + return pull_requests + + def create_pull_request( + self, + title: str, + body: str, + head: str, + base: str = "main", + draft: bool = False, + ) -> requests.Response: + """ + Create pull request in repository. + + Args: + title: Title of pull request. + body: Body of pull request. + head: Name of branch to merge from. + base: Name of branch to merge to. Default is "main". + draft: Whether pull request is a draft. Default is False. + """ + + if head not in [branch.name for branch in self.get_branches()]: + raise ValueError( + f"Branch {head} does not exist. Please choose another branch name." + ) + + response = self.repository.create_pull( + title=title, body=body, head=head, base=base, draft=draft + ) + return response + + def get_pull_request(self, pull_request_number: int) -> requests.Response: + """ + Get pull request from repository. + + Args: + pull_request_number: Number of pull request. + + Returns: + Pull request. + """ + + pull_request = self.repository.get_pull(pull_request_number) + return pull_request + + def update_pull_request( + self, pull_request_number: int, title: str, body: str + ) -> requests.Response: + """ + Update pull request in repository. + + Args: + pull_request_number: Number of pull request. + title: Title of pull request. + body: Body of pull request. + """ + + pull_request = self.get_pull_request(pull_request_number) + response = pull_request.edit(title=title, body=body) + return response + + def merge_pull_request(self, pull_request_number: int) -> requests.Response: + """ + Merge pull request in repository. + + Args: + pull_request_number: Number of pull request. + """ + + pull_request = self.get_pull_request(pull_request_number) + + # check if pull request can be merged + if pull_request.mergeable_state != "clean": + raise ValueError( + f"Pull request {pull_request_number} cannot be merged. Please check the pull request." + ) + + response = pull_request.merge() + return response + + def comment_on_pull_request(self, pull_request_number: int, comment: str) -> None: + """ + Comment on pull request in repository. + + Args: + pull_request_number: Number of pull request. + comment: Comment. + """ + + pull_request = self.get_pull_request(pull_request_number) + response = pull_request.create_issue_comment(comment) + + return response + + def close_pull_request(self, pull_request_number: int) -> requests.Response: + """ + Close pull request in repository. + + Args: + pull_request_number: Number of pull request. 
+ """ + + pull_request = self.get_pull_request(pull_request_number) + response = pull_request.edit(state="closed") + return response + + def get_issues(self, state: str = "open") -> list[requests.Response]: + """ + Get all issues from repository. + + Args: + state: State of issues. Default is "open". + + Returns: + List of issues. + """ + + issues = self.repository.get_issues(state=state) + issues_numbers = [issue.number for issue in issues] + return issues_numbers + + def create_issue( + self, + title: str, + body: str, + ) -> requests.Response: + """ + Create issue in repository. + + Args: + title: Title of issue. + body: Body of issue. + assignee: Name of assignee. Default is None. + milestone: Number of milestone. Default is None. + labels: List of labels. Default is None. + """ + + response = self.repository.create_issue( + title=title, + body=body, + ) + return response + + def get_issue(self, issue_number: int) -> requests.Response: + """ + Get issue from repository. + + Args: + issue_number: Number of issue. + + Returns: + Issue. + """ + + issue = self.repository.get_issue(issue_number) + return issue + + def update_issue(self, issue_number: int, title: str, body: str) -> None: + """ + Update issue in repository. + + Args: + issue_number: Number of issue. + title: Title of issue. + body: Body of issue. + """ + + issue = self.get_issue(issue_number) + issue.edit(title=title, body=body) + + def close_issue(self, issue_number: int) -> None: + """ + Close issue in repository. + + Args: + issue_number: Number of issue. + """ + + issue = self.get_issue(issue_number) + issue.edit(state="closed") + + def comment_on_issue(self, issue_number: int, comment: str) -> requests.Response: + """ + Comment on issue in repository. + + Args: + issue_number: Number of issue. + comment: Comment. 
+ """ + + issue = self.get_issue(issue_number) + response = issue.create_comment(comment) + return response diff --git a/shared/github_communication/githup_api_test.ipynb b/shared/github_communication/githup_api_test.ipynb new file mode 100644 index 0000000..25e13ed --- /dev/null +++ b/shared/github_communication/githup_api_test.ipynb @@ -0,0 +1,322 @@ +{ + "cells": [ + { + "cell_type": "code", + "execution_count": 1, + "metadata": {}, + "outputs": [], + "source": [ + "import os\n", + "from github_api_wrapper import GithubAPIWrapper\n" + ] + }, + { + "cell_type": "code", + "execution_count": 2, + "metadata": {}, + "outputs": [], + "source": [ + "GITHUB_TOKEN = os.getenv(\"GITHUB_TOKEN\")\n", + "USER_NAME = # your github user name\n", + "REPOSITORY_NAME = # your repository name\n", + "g = GithubAPIWrapper(GITHUB_TOKEN, f\"{USER_NAME}/{REPOSITORY_NAME}\")" + ] + }, + { + "cell_type": "code", + "execution_count": 3, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "['README.md', 'create_file.txt', 'folder/test.py', 'folder/test.txt', 'test_branch/test.py']\n" + ] + } + ], + "source": [ + "print(g.get_file_paths(\"main\"))" + ] + }, + { + "cell_type": "code", + "execution_count": 4, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "# api_controlled_github\n" + ] + } + ], + "source": [ + "print(g.get_file_content(\"README.md\", \"main\"))\n" + ] + }, + { + "cell_type": "code", + "execution_count": 5, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "\n" + ] + } + ], + "source": [ + "\n", + "print(g.get_branches())\n" + ] + }, + { + "cell_type": "code", + "execution_count": 6, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "GitRef(ref=\"refs/heads/test\")\n" + ] + } + ], + "source": [ + "print(g.create_branch(\"test\"))\n" + ] + }, + { + "cell_type": "code", + "execution_count": 10, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "{'content': ContentFile(path=\"test.txt\"), 'commit': Commit(sha=\"a20a7102161ef99c0138148150366a404d0b6594\")}\n" + ] + } + ], + "source": [ + "print(g.create_file(\"test.txt\", \"test\", \"add test.txt\", \"test\"))\n" + ] + }, + { + "cell_type": "code", + "execution_count": 8, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "{'commit': Commit(sha=\"62ca8853747b224983f1ee7ccc110966c00de03f\"), 'content': ContentFile(path=\"test.txt\")}\n" + ] + } + ], + "source": [ + "print(g.update_file(\"test.txt\", \"test2\", \"update test.txt\", \"test\"))\n" + ] + }, + { + "cell_type": "code", + "execution_count": 9, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "{'commit': Commit(sha=\"199e933d7718f2bdbfcc61e53ec203f87a11f333\"), 'content': NotSet}\n" + ] + } + ], + "source": [ + "print(g.delete_file(\"test.txt\", \"delete test.txt\", \"test\"))\n" + ] + }, + { + "cell_type": "code", + "execution_count": 11, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "PullRequest(title=\"add test.txt\", number=4)\n" + ] + } + ], + "source": [ + "print(g.create_pull_request(\"add test.txt\", \"add test.txt\", \"test\", \"main\"))" + ] + }, + { + "cell_type": "code", + "execution_count": 3, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "\n" + ] + } 
+ ], + "source": [ + "print(g.get_pull_requests())\n" + ] + }, + { + "cell_type": "code", + "execution_count": 4, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "IssueComment(user=NamedUser(login=\"RomanGoEmpire\"), id=1807230315)\n" + ] + } + ], + "source": [ + "print(g.comment_on_pull_request(4, \"text comment\"))" + ] + }, + { + "cell_type": "code", + "execution_count": 15, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "PullRequestMergeStatus(sha=\"19234f6f6e66578a729ccfcff2f496b1788e414b\", merged=True)\n" + ] + } + ], + "source": [ + "\n", + "print(g.merge_pull_request(4))\n" + ] + }, + { + "cell_type": "code", + "execution_count": 16, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "\n" + ] + } + ], + "source": [ + "\n", + "print(g.get_issues())" + ] + }, + { + "cell_type": "code", + "execution_count": 5, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Issue(title=\"Added by API\", number=7)\n" + ] + } + ], + "source": [ + "print(g.create_issue(\"Added by API\", \"Added by API\"))" + ] + }, + { + "cell_type": "code", + "execution_count": 18, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "IssueComment(user=NamedUser(login=\"RomanGoEmpire\"), id=1807223791)\n" + ] + } + ], + "source": [ + "print(g.comment_on_issue(2, \"Commented by API\"))" + ] + }, + { + "cell_type": "code", + "execution_count": 19, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "None\n" + ] + } + ], + "source": [ + "print(g.update_issue(2, \"Updated by API\", \"Updated by API\"))\n" + ] + }, + { + "cell_type": "code", + "execution_count": 20, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "None\n" + ] + } + ], + "source": [ + "print(g.close_issue(2))\n" + ] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python 3", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.12.0" + } + }, + "nbformat": 4, + "nbformat_minor": 2 +} From 4daf2df01712ee694bd86cbcf8c197e7d7f0d182 Mon Sep 17 00:00:00 2001 From: RomanGoEmpire <71276897+RomanGoEmpire@users.noreply.github.com> Date: Sun, 12 Nov 2023 21:32:39 +0100 Subject: [PATCH 079/141] First prototype for github extension --- .../github_api_wrapper.py | 388 ++++++++++++++++++ .../githup_api_test.ipynb | 322 +++++++++++++++ 2 files changed, 710 insertions(+) create mode 100644 shared/github_communication/github_api_wrapper.py create mode 100644 shared/github_communication/githup_api_test.ipynb diff --git a/shared/github_communication/github_api_wrapper.py b/shared/github_communication/github_api_wrapper.py new file mode 100644 index 0000000..afe230e --- /dev/null +++ b/shared/github_communication/github_api_wrapper.py @@ -0,0 +1,388 @@ +import requests +from github import Auth, Github + + +class GithubAPIWrapper: + def __init__(self, api_token: str, repository_name: str) -> None: + # personal access token + self.api_token = api_token + # format: "user_name/repository_name" + self.repo_name = repository_name + self.repository = self.initialize_repository() + + def 
initialize_repository(self): + """ + Initialize Github repository. + + Returns: + Github repository. + """ + auth = Auth.Token(self.api_token) + return Github(auth=auth).get_repo(self.repo_name) + + def get_file_paths(self, branch_name: str) -> list[str]: + """ + Get all file names and contents from repository. + The path to the file is relative to the repository root. + + Args: + branch_name: Name of the branch. + + Returns: + List of file paths. + """ + + files = [] + contents = self.repository.get_contents("", ref=branch_name) + + for file_content in contents: + # if file is a directory, get the contents of this directory and add them to the list + if file_content.type == "dir": + contents.extend( + self.repository.get_contents(file_content.path, ref=branch_name) + ) + else: + files += [file_content.path] + return files + + def get_file_content(self, file_path: str, branch_name: str) -> str: + """ + Get file content from repository. + + Args: + file_path: Path to file. + branch_name: Name of the branch. + + Returns: + File content. + """ + + if file_path not in self.get_file_paths(branch_name): + raise FileNotFoundError( + f"File {file_path} not found in repository. Please check the file path." + ) + file = self.repository.get_contents(file_path, ref=branch_name) + content = file.decoded_content.decode("utf-8") + return content + + def create_file( + self, file_path: str, file_content: str, commit_message: str, branch_name: str + ) -> requests.Response: + """ + Create file in repository. + + Args: + file_path: Path to file. + file_content: Content of file. + commit_message: Commit message. + branch_name: Name of the branch. + """ + + response = self.repository.create_file( + file_path, commit_message, file_content, branch=branch_name + ) + return response + + def update_file( + self, file_path: str, file_content: str, commit_message: str, branch_name: str + ) -> requests.Response: + """ + Update file in repository. + + Args: + file_path: Path to file. + file_content: Content of file. + commit_message: Commit message. + branch_name: Name of the branch. + """ + + if file_path not in self.get_file_paths(branch_name): + raise FileNotFoundError( + f"File {file_path} not found in repository. Please check the file path." + ) + + file = self.repository.get_contents(file_path, ref=branch_name) + response = self.repository.update_file( + file_path, commit_message, file_content, file.sha, branch=branch_name + ) + return response + + def delete_file( + self, file_path: str, commit_message: str, branch_name: str + ) -> requests.Response: + """ + Delete file in repository. + + Args: + file_path: Path to file. + commit_message: Commit message. + branch_name: Name of the branch. + """ + + if file_path not in self.get_file_paths(branch_name): + raise FileNotFoundError( + f"File {file_path} not found in repository. Please check the file path." + ) + + file = self.repository.get_contents(file_path, ref=branch_name) + response = self.repository.delete_file( + file_path, commit_message, file.sha, branch=branch_name + ) + return response + + def get_branches(self) -> list[str]: + """ + Get all branches from repository. + + Returns: + List of branches. + """ + + branches = self.repository.get_branches() + branches = [branch.name for branch in branches] + return branches + + def create_branch( + self, branch_name: str, from_branch: str = "main" + ) -> requests.Response: + """ + Create branch in repository. + + Args: + branch_name: Name of branch. + from_branch: Name of branch to branch from. Default is "main". 
+ """ + + brances = self.get_branches() + for branch in brances: + if branch.name == branch_name: + raise ValueError( + f"Branch {branch_name} already exists. Please choose another branch name." + ) + + response = self.repository.create_git_ref( + ref=f"refs/heads/{branch_name}", + sha=self.repository.get_branch(from_branch).commit.sha, + ) + return response + + def delete_branch(self, branch_name: str) -> requests.Response: + """ + Delete branch in repository. + + Args: + branch_name: Name of branch. + """ + + if branch_name not in [branch.name for branch in self.get_branches()]: + raise ValueError( + f"Branch {branch_name} does not exist. Please choose another branch name." + ) + + response = self.repository.get_git_ref(f"heads/{branch_name}").delete() + return response + + def get_pull_requests(self, state: str = "open") -> list[requests.Response]: + """ + Get all pull requests from repository. + + Args: + state: State of pull requests. Default is "open". + + Returns: + List of numbers of pull requests. + """ + pull_requests = self.repository.get_pulls(state=state) + return pull_requests + + def create_pull_request( + self, + title: str, + body: str, + head: str, + base: str = "main", + draft: bool = False, + ) -> requests.Response: + """ + Create pull request in repository. + + Args: + title: Title of pull request. + body: Body of pull request. + head: Name of branch to merge from. + base: Name of branch to merge to. Default is "main". + draft: Whether pull request is a draft. Default is False. + """ + + if head not in [branch.name for branch in self.get_branches()]: + raise ValueError( + f"Branch {head} does not exist. Please choose another branch name." + ) + + response = self.repository.create_pull( + title=title, body=body, head=head, base=base, draft=draft + ) + return response + + def get_pull_request(self, pull_request_number: int) -> requests.Response: + """ + Get pull request from repository. + + Args: + pull_request_number: Number of pull request. + + Returns: + Pull request. + """ + + pull_request = self.repository.get_pull(pull_request_number) + return pull_request + + def update_pull_request( + self, pull_request_number: int, title: str, body: str + ) -> requests.Response: + """ + Update pull request in repository. + + Args: + pull_request_number: Number of pull request. + title: Title of pull request. + body: Body of pull request. + """ + + pull_request = self.get_pull_request(pull_request_number) + response = pull_request.edit(title=title, body=body) + return response + + def merge_pull_request(self, pull_request_number: int) -> requests.Response: + """ + Merge pull request in repository. + + Args: + pull_request_number: Number of pull request. + """ + + pull_request = self.get_pull_request(pull_request_number) + + # check if pull request can be merged + if pull_request.mergeable_state != "clean": + raise ValueError( + f"Pull request {pull_request_number} cannot be merged. Please check the pull request." + ) + + response = pull_request.merge() + return response + + def comment_on_pull_request(self, pull_request_number: int, comment: str) -> None: + """ + Comment on pull request in repository. + + Args: + pull_request_number: Number of pull request. + comment: Comment. + """ + + pull_request = self.get_pull_request(pull_request_number) + response = pull_request.create_issue_comment(comment) + + return response + + def close_pull_request(self, pull_request_number: int) -> requests.Response: + """ + Close pull request in repository. 
+ + Args: + pull_request_number: Number of pull request. + """ + + pull_request = self.get_pull_request(pull_request_number) + response = pull_request.edit(state="closed") + return response + + def get_issues(self, state: str = "open") -> list[requests.Response]: + """ + Get all issues from repository. + + Args: + state: State of issues. Default is "open". + + Returns: + List of issues. + """ + + issues = self.repository.get_issues(state=state) + issues_numbers = [issue.number for issue in issues] + return issues_numbers + + def create_issue( + self, + title: str, + body: str, + ) -> requests.Response: + """ + Create issue in repository. + + Args: + title: Title of issue. + body: Body of issue. + assignee: Name of assignee. Default is None. + milestone: Number of milestone. Default is None. + labels: List of labels. Default is None. + """ + + response = self.repository.create_issue( + title=title, + body=body, + ) + return response + + def get_issue(self, issue_number: int) -> requests.Response: + """ + Get issue from repository. + + Args: + issue_number: Number of issue. + + Returns: + Issue. + """ + + issue = self.repository.get_issue(issue_number) + return issue + + def update_issue(self, issue_number: int, title: str, body: str) -> None: + """ + Update issue in repository. + + Args: + issue_number: Number of issue. + title: Title of issue. + body: Body of issue. + """ + + issue = self.get_issue(issue_number) + issue.edit(title=title, body=body) + + def close_issue(self, issue_number: int) -> None: + """ + Close issue in repository. + + Args: + issue_number: Number of issue. + """ + + issue = self.get_issue(issue_number) + issue.edit(state="closed") + + def comment_on_issue(self, issue_number: int, comment: str) -> requests.Response: + """ + Comment on issue in repository. + + Args: + issue_number: Number of issue. + comment: Comment. 
+ """ + + issue = self.get_issue(issue_number) + response = issue.create_comment(comment) + return response diff --git a/shared/github_communication/githup_api_test.ipynb b/shared/github_communication/githup_api_test.ipynb new file mode 100644 index 0000000..25e13ed --- /dev/null +++ b/shared/github_communication/githup_api_test.ipynb @@ -0,0 +1,322 @@ +{ + "cells": [ + { + "cell_type": "code", + "execution_count": 1, + "metadata": {}, + "outputs": [], + "source": [ + "import os\n", + "from github_api_wrapper import GithubAPIWrapper\n" + ] + }, + { + "cell_type": "code", + "execution_count": 2, + "metadata": {}, + "outputs": [], + "source": [ + "GITHUB_TOKEN = os.getenv(\"GITHUB_TOKEN\")\n", + "USER_NAME = # your github user name\n", + "REPOSITORY_NAME = # your repository name\n", + "g = GithubAPIWrapper(GITHUB_TOKEN, f\"{USER_NAME}/{REPOSITORY_NAME}\")" + ] + }, + { + "cell_type": "code", + "execution_count": 3, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "['README.md', 'create_file.txt', 'folder/test.py', 'folder/test.txt', 'test_branch/test.py']\n" + ] + } + ], + "source": [ + "print(g.get_file_paths(\"main\"))" + ] + }, + { + "cell_type": "code", + "execution_count": 4, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "# api_controlled_github\n" + ] + } + ], + "source": [ + "print(g.get_file_content(\"README.md\", \"main\"))\n" + ] + }, + { + "cell_type": "code", + "execution_count": 5, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "\n" + ] + } + ], + "source": [ + "\n", + "print(g.get_branches())\n" + ] + }, + { + "cell_type": "code", + "execution_count": 6, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "GitRef(ref=\"refs/heads/test\")\n" + ] + } + ], + "source": [ + "print(g.create_branch(\"test\"))\n" + ] + }, + { + "cell_type": "code", + "execution_count": 10, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "{'content': ContentFile(path=\"test.txt\"), 'commit': Commit(sha=\"a20a7102161ef99c0138148150366a404d0b6594\")}\n" + ] + } + ], + "source": [ + "print(g.create_file(\"test.txt\", \"test\", \"add test.txt\", \"test\"))\n" + ] + }, + { + "cell_type": "code", + "execution_count": 8, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "{'commit': Commit(sha=\"62ca8853747b224983f1ee7ccc110966c00de03f\"), 'content': ContentFile(path=\"test.txt\")}\n" + ] + } + ], + "source": [ + "print(g.update_file(\"test.txt\", \"test2\", \"update test.txt\", \"test\"))\n" + ] + }, + { + "cell_type": "code", + "execution_count": 9, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "{'commit': Commit(sha=\"199e933d7718f2bdbfcc61e53ec203f87a11f333\"), 'content': NotSet}\n" + ] + } + ], + "source": [ + "print(g.delete_file(\"test.txt\", \"delete test.txt\", \"test\"))\n" + ] + }, + { + "cell_type": "code", + "execution_count": 11, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "PullRequest(title=\"add test.txt\", number=4)\n" + ] + } + ], + "source": [ + "print(g.create_pull_request(\"add test.txt\", \"add test.txt\", \"test\", \"main\"))" + ] + }, + { + "cell_type": "code", + "execution_count": 3, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "\n" + ] + } 
+ ], + "source": [ + "print(g.get_pull_requests())\n" + ] + }, + { + "cell_type": "code", + "execution_count": 4, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "IssueComment(user=NamedUser(login=\"RomanGoEmpire\"), id=1807230315)\n" + ] + } + ], + "source": [ + "print(g.comment_on_pull_request(4, \"text comment\"))" + ] + }, + { + "cell_type": "code", + "execution_count": 15, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "PullRequestMergeStatus(sha=\"19234f6f6e66578a729ccfcff2f496b1788e414b\", merged=True)\n" + ] + } + ], + "source": [ + "\n", + "print(g.merge_pull_request(4))\n" + ] + }, + { + "cell_type": "code", + "execution_count": 16, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "\n" + ] + } + ], + "source": [ + "\n", + "print(g.get_issues())" + ] + }, + { + "cell_type": "code", + "execution_count": 5, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "Issue(title=\"Added by API\", number=7)\n" + ] + } + ], + "source": [ + "print(g.create_issue(\"Added by API\", \"Added by API\"))" + ] + }, + { + "cell_type": "code", + "execution_count": 18, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "IssueComment(user=NamedUser(login=\"RomanGoEmpire\"), id=1807223791)\n" + ] + } + ], + "source": [ + "print(g.comment_on_issue(2, \"Commented by API\"))" + ] + }, + { + "cell_type": "code", + "execution_count": 19, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "None\n" + ] + } + ], + "source": [ + "print(g.update_issue(2, \"Updated by API\", \"Updated by API\"))\n" + ] + }, + { + "cell_type": "code", + "execution_count": 20, + "metadata": {}, + "outputs": [ + { + "name": "stdout", + "output_type": "stream", + "text": [ + "None\n" + ] + } + ], + "source": [ + "print(g.close_issue(2))\n" + ] + } + ], + "metadata": { + "kernelspec": { + "display_name": "Python 3", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.12.0" + } + }, + "nbformat": 4, + "nbformat_minor": 2 +} From 2d477f893b801e130bf91418155b30eaa9ac4568 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Guillermo=20del=20R=C3=ADo?= Date: Sun, 12 Nov 2023 22:43:30 +0100 Subject: [PATCH 080/141] Document agent functions --- agent_functions/README.md | 46 ++++++++++++++++++++++++++++++++++++--- 1 file changed, 43 insertions(+), 3 deletions(-) diff --git a/agent_functions/README.md b/agent_functions/README.md index 2b8ee47..ce36d0c 100644 --- a/agent_functions/README.md +++ b/agent_functions/README.md @@ -1,4 +1,44 @@ -Agents may respond with messages that are not meant to be propagated. +# Objective +This folder includes some initial tests on how function calling may enable HAAS communication and cognition. A network with a limited number of agents is created as a test to identify issues and help guide the architecture: one boss talking to three worker agents. -To solve that issue the topology is kept static but agents are given the option to send the messages to their contacts only if they see fit. -This is implemented via function calling. 
+# Observations
+## Choosing what to propagate
+The simplest approach when connecting multiple agents is to have the downstream agents get all the messages from their source nodes. This limits the capacity of the model greatly.
+Instead, it is possible to let the agents decide which messages to propagate, by instructing them to call a *sendMessage* function.
+
+## Function specificity
+There's value in having more specific and meaningful functions instead of general all-purpose ones. Eg.: assignTask vs sendMessage
+Advantages:
+- Semantic cues for the model directly in the function declaration. More lightweight system prompts.
+- Easier to program custom behaviours for different functions. No need to switch based on parameters.
+
+## Channels
+There's value in a basic "sendMessage" function. But the moment multiple agents need to work on a task it makes sense to introduce the concept of *channels* which agents may be part of and to which they may broadcast messages.
+Note: all agents of a channel will receive messages queued there EXCEPT the one that sent it.
+
+## Peer chatter and race conditions
+With a basic boss-worker x3 topology it was clear that the workers had trouble communicating effectively. They often step on each other and take a long time to further the discussion, making this system very token-inefficient.
+To prevent that, some prompt engineering strategies can be employed, eg.: *If you receive a message from other workers don't reply back unless necessary. Keep the worker channel as free from noise as possible. Share results in the channel to advance the mission, but do not send acknowledgements.*
+
+However, once the system prompt gets to a certain level of complexity it is hard to ensure that the agents will follow the rules consistently. In that sense some effort has been put into analysing off-model strategies to improve the communication.
+
+## Blocking communication
+The boss agent originally used *sendMessage* to pass the tasks to the workers. As it suffered from similar issues, *assignTask* was created. It functions almost identically to *sendMessage* except that it waits for the actual response before unblocking the run.
+There are limitations to this approach, such as the run expiry time; however, it has proven very effective so far with simple tasks.
+
+Following that pattern, one may modify the *broadcast* function so it waits for a response from one of the peers.
+
+## Raise hand function and queue
+The key issue behind the miscommunication between peer agents is that race conditions during model executions mean that some agents may broadcast a response before they've caught up with all the new messages in the channel, effectively introducing lag into the conversation.
+
+To combat that, a "raise hand" system can be used. Implemented as a multi-thread semaphore, only one agent may send messages to the channel at any given time.
+
+Any agent may request to be put into the *raised hands queue*. An agent may only appear once in the queue.
+Once an agent's turn is reached, it will first be given all the new messages in the channel, thus ensuring there's no lag in the conversation. Only then will it be allowed to broadcast its message.
+That will require some delicate work with the functions and the prompts, as it's a mandatory hybrid model+framework functionality: the framework can't do it without the model's collaboration and vice-versa.
+
+This will allow for many options down the road. Mixing concepts from telecom and psychology, it may be interesting to configure agents with different messaging priorities, ie.: able to jump places in the queue. So we would be modelling more "pushy" agent personalities by giving them access to high-priority messaging.
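(Editor's illustration — not part of the original commit.) A minimal, single-process sketch of the raised-hands mechanism described above; the `Channel` class, its method names, and the in-memory `queue.Queue` backing are assumptions made for illustration, not the repository's actual implementation:

```python
import queue
import threading

class Channel:
    """Toy model of a channel guarded by a "raised hands" queue.

    Only one agent broadcasts at a time, and it is handed every message it
    has not yet seen right before its turn, so it never answers a stale
    view of the conversation.
    """

    def __init__(self):
        self._messages = []                 # full channel history
        self._raised_hands = queue.Queue()  # agents waiting for a turn
        self._waiting = set()               # an agent may only appear once
        self._lock = threading.Lock()

    def raise_hand(self, agent_id: str) -> None:
        with self._lock:
            if agent_id not in self._waiting:
                self._waiting.add(agent_id)
                self._raised_hands.put(agent_id)

    def broadcast(self, agent_id: str, text: str) -> None:
        with self._lock:
            self._messages.append({"sender": agent_id, "text": text})

    def unseen_messages(self, agent_id: str, cursor: int) -> list:
        """Messages appended after `cursor`, excluding the agent's own."""
        with self._lock:
            return [m for m in self._messages[cursor:] if m["sender"] != agent_id]

    def next_turn(self) -> str:
        """Block until someone has raised a hand, then grant that agent the floor."""
        agent_id = self._raised_hands.get()
        with self._lock:
            self._waiting.discard(agent_id)
        return agent_id
```

A priority field on `raise_hand` would be one way to let the "pushy" personalities just mentioned jump places in the queue.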
+
+## Thinking functions
+By introducing a *thinking* function, agents may indicate they’re not ready to provide an answer. The introduction of self-prompting provides breathing room to the model and reduces the chances of subpar messages propagating through the network.
+From the framework side it will be seen as an initial function call (*thinking*) which will be immediately answered with an ACK. Afterwards, that thread would be prompted to *continue working*.
\ No newline at end of file

From 21b0d60eb5224259b52315cf0e332e1d3bd6869e Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Guillermo=20del=20R=C3=ADo?=
Date: Sun, 12 Nov 2023 22:43:30 +0100
Subject: [PATCH 081/141] Document agent functions

---
 agent_functions/README.md | 46 ++++++++++++++++++++++++++++++++++++---
 1 file changed, 43 insertions(+), 3 deletions(-)

diff --git a/agent_functions/README.md b/agent_functions/README.md
index 2b8ee47..ce36d0c 100644
--- a/agent_functions/README.md
+++ b/agent_functions/README.md
@@ -1,4 +1,44 @@
-Agents may respond with messages that are not meant to be propagated.
+# Objective
+This folder includes some initial tests on how function calling may enable HAAS communication and cognition. A network with a limited number of agents is created as a test to identify issues and help guide the architecture: one boss talking to three worker agents.
-To solve that issue the topology is kept static but agents are given the option to send the messages to their contacts only if they see fit.
-This is implemented via function calling.
+# Observations
+## Choosing what to propagate
+The simplest approach when connecting multiple agents is to have the downstream agents get all the messages from their source nodes. This limits the capacity of the model greatly.
+Instead, it is possible to let the agents decide which messages to propagate, by instructing them to call a *sendMessage* function.
+
+## Function specificity
+There's value in having more specific and meaningful functions instead of general all-purpose ones. Eg.: assignTask vs sendMessage
+Advantages:
+- Semantic cues for the model directly in the function declaration. More lightweight system prompts.
+- Easier to program custom behaviours for different functions. No need to switch based on parameters.
+
+## Channels
+There's value in a basic "sendMessage" function. But the moment multiple agents need to work on a task it makes sense to introduce the concept of *channels* which agents may be part of and to which they may broadcast messages.
+Note: all agents of a channel will receive messages queued there EXCEPT the one that sent it.
+
+## Peer chatter and race conditions
+With a basic boss-worker x3 topology it was clear that the workers had trouble communicating effectively. They often step on each other and take a long time to further the discussion, making this system very token-inefficient.
+To prevent that, some prompt engineering strategies can be employed, eg.: *If you receive a message from other workers don't reply back unless necessary. Keep the worker channel as free from noise as possible. Share results in the channel to advance the mission, but do not send acknowledgements.*
+
+However, once the system prompt gets to a certain level of complexity it is hard to ensure that the agents will follow the rules consistently. In that sense some effort has been put into analysing off-model strategies to improve the communication.
+
+## Blocking communication
+The boss agent originally used *sendMessage* to pass the tasks to the workers. As it suffered from similar issues, *assignTask* was created. It functions almost identically to *sendMessage* except that it waits for the actual response before unblocking the run.
+There are limitations to this approach, such as the run expiry time; however, it has proven very effective so far with simple tasks.
+
+Following that pattern, one may modify the *broadcast* function so it waits for a response from one of the peers.
+
+## Raise hand function and queue
+The key issue behind the miscommunication between peer agents is that race conditions during model executions mean that some agents may broadcast a response before they've caught up with all the new messages in the channel, effectively introducing lag into the conversation.
+
+To combat that, a "raise hand" system can be used. Implemented as a multi-thread semaphore, only one agent may send messages to the channel at any given time.
+
+Any agent may request to be put into the *raised hands queue*. An agent may only appear once in the queue.
+Once an agent's turn is reached, it will first be given all the new messages in the channel, thus ensuring there's no lag in the conversation. Only then will it be allowed to broadcast its message.
+That will require some delicate work with the functions and the prompts, as it's a mandatory hybrid model+framework functionality: the framework can't do it without the model's collaboration and vice-versa.
+
+This will allow for many options down the road. Mixing concepts from telecom and psychology, it may be interesting to configure agents with different messaging priorities, ie.: able to jump places in the queue. So we would be modelling more "pushy" agent personalities by giving them access to high-priority messaging.
+
+## Thinking functions
+By introducing a *thinking* function, agents may indicate they’re not ready to provide an answer. The introduction of self-prompting provides breathing room to the model and reduces the chances of subpar messages propagating through the network.
+From the framework side it will be seen as an initial function call (*thinking*) which will be immediately answered with an ACK. Afterwards, that thread would be prompted to *continue working*.
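(Editor's illustration — not part of the original commit.) In the OpenAI Assistants function-calling format, declarations for the functions discussed above might look roughly like this; the parameter names and descriptions are assumptions, not the project's actual schemas:

```python
# Hypothetical tool declarations; the field layout follows the OpenAI
# function-tool format, but the exact names, parameters, and wording are
# illustrative assumptions only.
ASSIGN_TASK_TOOL = {
    "type": "function",
    "function": {
        "name": "assignTask",
        "description": "Assign a task to a worker agent and wait for its result before the run is unblocked.",
        "parameters": {
            "type": "object",
            "properties": {
                "recipient": {"type": "string", "description": "name of the worker agent"},
                "task": {"type": "string", "description": "what the worker should do"},
            },
            "required": ["recipient", "task"],
        },
    },
}

THINKING_TOOL = {
    "type": "function",
    "function": {
        "name": "thinking",
        "description": "Signal that the agent is not ready to answer yet; the framework replies with an ACK and then prompts the thread to continue working.",
        "parameters": {"type": "object", "properties": {}},
    },
}
```

Keeping the task semantics in the declaration itself is the point of the function-specificity observation above: the model reads its cues from the tool name and description rather than from a longer system prompt.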
\ No newline at end of file From e60439f79a2d64f8a2cc2bb1621a9f67d0b0ef84 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Guillermo=20del=20R=C3=ADo?= Date: Sun, 12 Nov 2023 22:45:18 +0100 Subject: [PATCH 082/141] Reorder --- {agent_functions => agents/agent_functions}/README.md | 0 {agent_functions => agents/agent_functions}/agents.yaml | 0 {agent_functions => agents/agent_functions}/connect.py | 0 .../agent_functions}/examples/text-manipulation/agents.yaml | 0 .../agent_functions}/examples/text-manipulation/prompts.md | 0 {agent_functions => agents/agent_functions}/functions.md | 0 {agent_functions => agents/agent_functions}/prompts.md | 0 7 files changed, 0 insertions(+), 0 deletions(-) rename {agent_functions => agents/agent_functions}/README.md (100%) rename {agent_functions => agents/agent_functions}/agents.yaml (100%) rename {agent_functions => agents/agent_functions}/connect.py (100%) rename {agent_functions => agents/agent_functions}/examples/text-manipulation/agents.yaml (100%) rename {agent_functions => agents/agent_functions}/examples/text-manipulation/prompts.md (100%) rename {agent_functions => agents/agent_functions}/functions.md (100%) rename {agent_functions => agents/agent_functions}/prompts.md (100%) diff --git a/agent_functions/README.md b/agents/agent_functions/README.md similarity index 100% rename from agent_functions/README.md rename to agents/agent_functions/README.md diff --git a/agent_functions/agents.yaml b/agents/agent_functions/agents.yaml similarity index 100% rename from agent_functions/agents.yaml rename to agents/agent_functions/agents.yaml diff --git a/agent_functions/connect.py b/agents/agent_functions/connect.py similarity index 100% rename from agent_functions/connect.py rename to agents/agent_functions/connect.py diff --git a/agent_functions/examples/text-manipulation/agents.yaml b/agents/agent_functions/examples/text-manipulation/agents.yaml similarity index 100% rename from agent_functions/examples/text-manipulation/agents.yaml rename to agents/agent_functions/examples/text-manipulation/agents.yaml diff --git a/agent_functions/examples/text-manipulation/prompts.md b/agents/agent_functions/examples/text-manipulation/prompts.md similarity index 100% rename from agent_functions/examples/text-manipulation/prompts.md rename to agents/agent_functions/examples/text-manipulation/prompts.md diff --git a/agent_functions/functions.md b/agents/agent_functions/functions.md similarity index 100% rename from agent_functions/functions.md rename to agents/agent_functions/functions.md diff --git a/agent_functions/prompts.md b/agents/agent_functions/prompts.md similarity index 100% rename from agent_functions/prompts.md rename to agents/agent_functions/prompts.md From e32f86eb0e3c235b5be0e7fd226d01039fd05f08 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Guillermo=20del=20R=C3=ADo?= Date: Sun, 12 Nov 2023 22:45:18 +0100 Subject: [PATCH 083/141] Reorder --- {agent_functions => agents/agent_functions}/README.md | 0 {agent_functions => agents/agent_functions}/agents.yaml | 0 {agent_functions => agents/agent_functions}/connect.py | 0 .../agent_functions}/examples/text-manipulation/agents.yaml | 0 .../agent_functions}/examples/text-manipulation/prompts.md | 0 {agent_functions => agents/agent_functions}/functions.md | 0 {agent_functions => agents/agent_functions}/prompts.md | 0 7 files changed, 0 insertions(+), 0 deletions(-) rename {agent_functions => agents/agent_functions}/README.md (100%) rename {agent_functions => agents/agent_functions}/agents.yaml (100%) rename {agent_functions 
=> agents/agent_functions}/connect.py (100%) rename {agent_functions => agents/agent_functions}/examples/text-manipulation/agents.yaml (100%) rename {agent_functions => agents/agent_functions}/examples/text-manipulation/prompts.md (100%) rename {agent_functions => agents/agent_functions}/functions.md (100%) rename {agent_functions => agents/agent_functions}/prompts.md (100%) diff --git a/agent_functions/README.md b/agents/agent_functions/README.md similarity index 100% rename from agent_functions/README.md rename to agents/agent_functions/README.md diff --git a/agent_functions/agents.yaml b/agents/agent_functions/agents.yaml similarity index 100% rename from agent_functions/agents.yaml rename to agents/agent_functions/agents.yaml diff --git a/agent_functions/connect.py b/agents/agent_functions/connect.py similarity index 100% rename from agent_functions/connect.py rename to agents/agent_functions/connect.py diff --git a/agent_functions/examples/text-manipulation/agents.yaml b/agents/agent_functions/examples/text-manipulation/agents.yaml similarity index 100% rename from agent_functions/examples/text-manipulation/agents.yaml rename to agents/agent_functions/examples/text-manipulation/agents.yaml diff --git a/agent_functions/examples/text-manipulation/prompts.md b/agents/agent_functions/examples/text-manipulation/prompts.md similarity index 100% rename from agent_functions/examples/text-manipulation/prompts.md rename to agents/agent_functions/examples/text-manipulation/prompts.md diff --git a/agent_functions/functions.md b/agents/agent_functions/functions.md similarity index 100% rename from agent_functions/functions.md rename to agents/agent_functions/functions.md diff --git a/agent_functions/prompts.md b/agents/agent_functions/prompts.md similarity index 100% rename from agent_functions/prompts.md rename to agents/agent_functions/prompts.md From df29bcfe654cc336ebeb8dce33d29df4864b9dbf Mon Sep 17 00:00:00 2001 From: RomanGoEmpire <71276897+RomanGoEmpire@users.noreply.github.com> Date: Sun, 12 Nov 2023 22:49:38 +0100 Subject: [PATCH 084/141] created README.md --- shared/github_communication/README.md | 12 ++++++++++++ 1 file changed, 12 insertions(+) create mode 100644 shared/github_communication/README.md diff --git a/shared/github_communication/README.md b/shared/github_communication/README.md new file mode 100644 index 0000000..6183d96 --- /dev/null +++ b/shared/github_communication/README.md @@ -0,0 +1,12 @@ +# GitHub API Wrapper + +This tool, leveraging [PyGithub](https://github.com/PyGithub/PyGithub), enables communication with GitHub. It provides agents with text-formatted information through the following methods: + +- **Files:** Retrieve, create, update, or delete file content. +- **Branches:** List, get, create, or delete branches. +- **Issues:** List, get, create, update, close, and comment on issues. +- **Pull Requests:** List, get, create, update, merge, close, and comment on pull requests. + +## To be Implemented + +- **Discussions:** List, get, create, delete discussions, and add messages. 
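(Editor's note.) A condensed usage sketch mirroring the calls exercised in the test notebook earlier in this series; the token, user name, and repository values are placeholders, and return values are the underlying PyGithub objects, as the notebook output shows:

```python
import os
from github_api_wrapper import GithubAPIWrapper

# Placeholders: supply your own token and "user/repo" string.
g = GithubAPIWrapper(os.getenv("GITHUB_TOKEN"), "your-user/your-repo")

print(g.get_file_paths("main"))                 # list files on a branch
print(g.get_file_content("README.md", "main"))  # read a file

g.create_branch("test")
g.create_file("test.txt", "test", "add test.txt", "test")  # same arguments as in the notebook example
pr = g.create_pull_request("add test.txt", "add test.txt", "test", "main")
g.comment_on_pull_request(pr.number, "text comment")
g.merge_pull_request(pr.number)

issue = g.create_issue("Added by API", "Added by API")
g.comment_on_issue(issue.number, "Commented by API")
g.close_issue(issue.number)
```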
From 3343cea8f3d97e9b5b3f15350fbe3f267b4830fa Mon Sep 17 00:00:00 2001 From: RomanGoEmpire <71276897+RomanGoEmpire@users.noreply.github.com> Date: Sun, 12 Nov 2023 22:49:38 +0100 Subject: [PATCH 085/141] created README.md --- shared/github_communication/README.md | 12 ++++++++++++ 1 file changed, 12 insertions(+) create mode 100644 shared/github_communication/README.md diff --git a/shared/github_communication/README.md b/shared/github_communication/README.md new file mode 100644 index 0000000..6183d96 --- /dev/null +++ b/shared/github_communication/README.md @@ -0,0 +1,12 @@ +# GitHub API Wrapper + +This tool, leveraging [PyGithub](https://github.com/PyGithub/PyGithub), enables communication with GitHub. It provides agents with text-formatted information through the following methods: + +- **Files:** Retrieve, create, update, or delete file content. +- **Branches:** List, get, create, or delete branches. +- **Issues:** List, get, create, update, close, and comment on issues. +- **Pull Requests:** List, get, create, update, merge, close, and comment on pull requests. + +## To be Implemented + +- **Discussions:** List, get, create, delete discussions, and add messages. From 2bbda36822d165fca0d855b9fd761b82d1200414 Mon Sep 17 00:00:00 2001 From: RomanGoEmpire <71276897+RomanGoEmpire@users.noreply.github.com> Date: Sun, 12 Nov 2023 22:50:45 +0100 Subject: [PATCH 086/141] Refactor get_pull_requests method to return only pull request numbers. --- shared/github_communication/github_api_wrapper.py | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/shared/github_communication/github_api_wrapper.py b/shared/github_communication/github_api_wrapper.py index afe230e..3da06fa 100644 --- a/shared/github_communication/github_api_wrapper.py +++ b/shared/github_communication/github_api_wrapper.py @@ -193,7 +193,8 @@ def get_pull_requests(self, state: str = "open") -> list[requests.Response]: List of numbers of pull requests. """ pull_requests = self.repository.get_pulls(state=state) - return pull_requests + pull_requests_numbers = [pull_request.number for pull_request in pull_requests] + return pull_requests_numbers def create_pull_request( self, From d1411f42d3035f06202b51f50f22aa2ef726cf61 Mon Sep 17 00:00:00 2001 From: RomanGoEmpire <71276897+RomanGoEmpire@users.noreply.github.com> Date: Sun, 12 Nov 2023 22:50:45 +0100 Subject: [PATCH 087/141] Refactor get_pull_requests method to return only pull request numbers. --- shared/github_communication/github_api_wrapper.py | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/shared/github_communication/github_api_wrapper.py b/shared/github_communication/github_api_wrapper.py index afe230e..3da06fa 100644 --- a/shared/github_communication/github_api_wrapper.py +++ b/shared/github_communication/github_api_wrapper.py @@ -193,7 +193,8 @@ def get_pull_requests(self, state: str = "open") -> list[requests.Response]: List of numbers of pull requests. 
""" pull_requests = self.repository.get_pulls(state=state) - return pull_requests + pull_requests_numbers = [pull_request.number for pull_request in pull_requests] + return pull_requests_numbers def create_pull_request( self, From 3eebafcfd0904b0248c50926e2f119152658b2d3 Mon Sep 17 00:00:00 2001 From: RomanGoEmpire <71276897+RomanGoEmpire@users.noreply.github.com> Date: Sun, 12 Nov 2023 23:24:07 +0100 Subject: [PATCH 088/141] cleaned up jupyter notebook and updated readme --- shared/github_communication/README.md | 13 ++++++++- ..._test.ipynb => github_api_test_area.ipynb} | 29 +++++++++---------- 2 files changed, 26 insertions(+), 16 deletions(-) rename shared/github_communication/{githup_api_test.ipynb => github_api_test_area.ipynb} (91%) diff --git a/shared/github_communication/README.md b/shared/github_communication/README.md index 6183d96..a2a44db 100644 --- a/shared/github_communication/README.md +++ b/shared/github_communication/README.md @@ -2,11 +2,22 @@ This tool, leveraging [PyGithub](https://github.com/PyGithub/PyGithub), enables communication with GitHub. It provides agents with text-formatted information through the following methods: -- **Files:** Retrieve, create, update, or delete file content. +- **Files:** Retrieve, create, update, delete file content or get List of file paths. - **Branches:** List, get, create, or delete branches. - **Issues:** List, get, create, update, close, and comment on issues. - **Pull Requests:** List, get, create, update, merge, close, and comment on pull requests. + +## How to start + +1. Create a **Github authorization token!** ([Youtube explanation](https://www.youtube.com/shorts/rlO6C6dDKNs?feature=share)) + +2. Check out the github_api_test_area.ipynb (Jupyter notebook)to get familiar with it + + 1. Insert your authorization token, username and repository name + + + ## To be Implemented - **Discussions:** List, get, create, delete discussions, and add messages. 
diff --git a/shared/github_communication/githup_api_test.ipynb b/shared/github_communication/github_api_test_area.ipynb similarity index 91% rename from shared/github_communication/githup_api_test.ipynb rename to shared/github_communication/github_api_test_area.ipynb index 25e13ed..5c559d0 100644 --- a/shared/github_communication/githup_api_test.ipynb +++ b/shared/github_communication/github_api_test_area.ipynb @@ -7,7 +7,7 @@ "outputs": [], "source": [ "import os\n", - "from github_api_wrapper import GithubAPIWrapper\n" + "from github_api_wrapper import GithubAPIWrapper" ] }, { @@ -16,9 +16,9 @@ "metadata": {}, "outputs": [], "source": [ - "GITHUB_TOKEN = os.getenv(\"GITHUB_TOKEN\")\n", - "USER_NAME = # your github user name\n", - "REPOSITORY_NAME = # your repository name\n", + "GITHUB_TOKEN = os.getenv(\"GITHUB_TOKEN\") # or just paste your token here\n", + "USER_NAME = None# your github user name\n", + "REPOSITORY_NAME = None # your repository name\n", "g = GithubAPIWrapper(GITHUB_TOKEN, f\"{USER_NAME}/{REPOSITORY_NAME}\")" ] }, @@ -53,7 +53,7 @@ } ], "source": [ - "print(g.get_file_content(\"README.md\", \"main\"))\n" + "print(g.get_file_content(\"README.md\", \"main\"))" ] }, { @@ -71,7 +71,7 @@ ], "source": [ "\n", - "print(g.get_branches())\n" + "print(g.get_branches())" ] }, { @@ -88,7 +88,7 @@ } ], "source": [ - "print(g.create_branch(\"test\"))\n" + "print(g.create_branch(\"test\"))" ] }, { @@ -105,7 +105,7 @@ } ], "source": [ - "print(g.create_file(\"test.txt\", \"test\", \"add test.txt\", \"test\"))\n" + "print(g.create_file(\"test.txt\", \"test\", \"add test.txt\", \"test\"))" ] }, { @@ -122,7 +122,7 @@ } ], "source": [ - "print(g.update_file(\"test.txt\", \"test2\", \"update test.txt\", \"test\"))\n" + "print(g.update_file(\"test.txt\", \"test2\", \"update test.txt\", \"test\"))" ] }, { @@ -139,7 +139,7 @@ } ], "source": [ - "print(g.delete_file(\"test.txt\", \"delete test.txt\", \"test\"))\n" + "print(g.delete_file(\"test.txt\", \"delete test.txt\", \"test\"))" ] }, { @@ -173,7 +173,7 @@ } ], "source": [ - "print(g.get_pull_requests())\n" + "print(g.get_pull_requests())" ] }, { @@ -208,7 +208,7 @@ ], "source": [ "\n", - "print(g.merge_pull_request(4))\n" + "print(g.merge_pull_request(4))" ] }, { @@ -225,7 +225,6 @@ } ], "source": [ - "\n", "print(g.get_issues())" ] }, @@ -277,7 +276,7 @@ } ], "source": [ - "print(g.update_issue(2, \"Updated by API\", \"Updated by API\"))\n" + "print(g.update_issue(2, \"Updated by API\", \"Updated by API\"))" ] }, { @@ -294,7 +293,7 @@ } ], "source": [ - "print(g.close_issue(2))\n" + "print(g.close_issue(2))" ] } ], From be0ecfa17cd277cab46fd45b210bfe81b3063be3 Mon Sep 17 00:00:00 2001 From: RomanGoEmpire <71276897+RomanGoEmpire@users.noreply.github.com> Date: Sun, 12 Nov 2023 23:24:07 +0100 Subject: [PATCH 089/141] cleaned up jupyter notebook and updated readme --- shared/github_communication/README.md | 13 ++++++++- ..._test.ipynb => github_api_test_area.ipynb} | 29 +++++++++---------- 2 files changed, 26 insertions(+), 16 deletions(-) rename shared/github_communication/{githup_api_test.ipynb => github_api_test_area.ipynb} (91%) diff --git a/shared/github_communication/README.md b/shared/github_communication/README.md index 6183d96..a2a44db 100644 --- a/shared/github_communication/README.md +++ b/shared/github_communication/README.md @@ -2,11 +2,22 @@ This tool, leveraging [PyGithub](https://github.com/PyGithub/PyGithub), enables communication with GitHub. 
It provides agents with text-formatted information through the following methods: -- **Files:** Retrieve, create, update, or delete file content. +- **Files:** Retrieve, create, update, delete file content or get List of file paths. - **Branches:** List, get, create, or delete branches. - **Issues:** List, get, create, update, close, and comment on issues. - **Pull Requests:** List, get, create, update, merge, close, and comment on pull requests. + +## How to start + +1. Create a **Github authorization token!** ([Youtube explanation](https://www.youtube.com/shorts/rlO6C6dDKNs?feature=share)) + +2. Check out the github_api_test_area.ipynb (Jupyter notebook)to get familiar with it + + 1. Insert your authorization token, username and repository name + + + ## To be Implemented - **Discussions:** List, get, create, delete discussions, and add messages. diff --git a/shared/github_communication/githup_api_test.ipynb b/shared/github_communication/github_api_test_area.ipynb similarity index 91% rename from shared/github_communication/githup_api_test.ipynb rename to shared/github_communication/github_api_test_area.ipynb index 25e13ed..5c559d0 100644 --- a/shared/github_communication/githup_api_test.ipynb +++ b/shared/github_communication/github_api_test_area.ipynb @@ -7,7 +7,7 @@ "outputs": [], "source": [ "import os\n", - "from github_api_wrapper import GithubAPIWrapper\n" + "from github_api_wrapper import GithubAPIWrapper" ] }, { @@ -16,9 +16,9 @@ "metadata": {}, "outputs": [], "source": [ - "GITHUB_TOKEN = os.getenv(\"GITHUB_TOKEN\")\n", - "USER_NAME = # your github user name\n", - "REPOSITORY_NAME = # your repository name\n", + "GITHUB_TOKEN = os.getenv(\"GITHUB_TOKEN\") # or just paste your token here\n", + "USER_NAME = None# your github user name\n", + "REPOSITORY_NAME = None # your repository name\n", "g = GithubAPIWrapper(GITHUB_TOKEN, f\"{USER_NAME}/{REPOSITORY_NAME}\")" ] }, @@ -53,7 +53,7 @@ } ], "source": [ - "print(g.get_file_content(\"README.md\", \"main\"))\n" + "print(g.get_file_content(\"README.md\", \"main\"))" ] }, { @@ -71,7 +71,7 @@ ], "source": [ "\n", - "print(g.get_branches())\n" + "print(g.get_branches())" ] }, { @@ -88,7 +88,7 @@ } ], "source": [ - "print(g.create_branch(\"test\"))\n" + "print(g.create_branch(\"test\"))" ] }, { @@ -105,7 +105,7 @@ } ], "source": [ - "print(g.create_file(\"test.txt\", \"test\", \"add test.txt\", \"test\"))\n" + "print(g.create_file(\"test.txt\", \"test\", \"add test.txt\", \"test\"))" ] }, { @@ -122,7 +122,7 @@ } ], "source": [ - "print(g.update_file(\"test.txt\", \"test2\", \"update test.txt\", \"test\"))\n" + "print(g.update_file(\"test.txt\", \"test2\", \"update test.txt\", \"test\"))" ] }, { @@ -139,7 +139,7 @@ } ], "source": [ - "print(g.delete_file(\"test.txt\", \"delete test.txt\", \"test\"))\n" + "print(g.delete_file(\"test.txt\", \"delete test.txt\", \"test\"))" ] }, { @@ -173,7 +173,7 @@ } ], "source": [ - "print(g.get_pull_requests())\n" + "print(g.get_pull_requests())" ] }, { @@ -208,7 +208,7 @@ ], "source": [ "\n", - "print(g.merge_pull_request(4))\n" + "print(g.merge_pull_request(4))" ] }, { @@ -225,7 +225,6 @@ } ], "source": [ - "\n", "print(g.get_issues())" ] }, @@ -277,7 +276,7 @@ } ], "source": [ - "print(g.update_issue(2, \"Updated by API\", \"Updated by API\"))\n" + "print(g.update_issue(2, \"Updated by API\", \"Updated by API\"))" ] }, { @@ -294,7 +293,7 @@ } ], "source": [ - "print(g.close_issue(2))\n" + "print(g.close_issue(2))" ] } ], From 116e01cafbebb9dae9e6f809025cf9d790355f88 Mon Sep 17 00:00:00 2001 From: 
FireMMDC Date: Sun, 12 Nov 2023 19:59:06 -0500 Subject: [PATCH 090/141] Moving some agent creation call outside of the tool_maker folder and into the agent_builder folder --- .../temporary_function_writer/instructions.md | 17 ++ .../temporary_function_writer/settings.json | 7 + .../agents/tool_creator/instructions.md | 34 +++ .../agents/tool_creator/settings.json | 33 +++ agents/agent_builder/create.py | 238 +++++++++--------- agents/tool_maker/assistant_manager.py | 58 +---- agents/tool_maker/tool_creator_metadata.json | 3 - 7 files changed, 222 insertions(+), 168 deletions(-) create mode 100644 agents/agent_builder/agents/temporary_function_writer/instructions.md create mode 100644 agents/agent_builder/agents/temporary_function_writer/settings.json create mode 100644 agents/agent_builder/agents/tool_creator/instructions.md create mode 100644 agents/agent_builder/agents/tool_creator/settings.json diff --git a/agents/agent_builder/agents/temporary_function_writer/instructions.md b/agents/agent_builder/agents/temporary_function_writer/instructions.md new file mode 100644 index 0000000..39f36ae --- /dev/null +++ b/agents/agent_builder/agents/temporary_function_writer/instructions.md @@ -0,0 +1,17 @@ +# Mission +- You will be provided JSON schema of an OpenAI function tool from an API and not a human user +- The JSON will contain all information about the function and you will need to translate it into a python function. + +# Background info +None + +# Rules +- Return only the python function you have written +- You must always implement the function with actual code +- Do not write any additional text as you are talking to an API, extraneous output will cause execution errors. +- The function should not contain generic placeholders of pseudo code, as it will beark the API +- If clarification is needed to write functioning code, request additional info as arguments without creating a real function or valid schema + +# Instructions +- Attempt to convert provided JSON to a fully functiong python function +- If you are unable to preform this transalation return request for additional info as arguments \ No newline at end of file diff --git a/agents/agent_builder/agents/temporary_function_writer/settings.json b/agents/agent_builder/agents/temporary_function_writer/settings.json new file mode 100644 index 0000000..51372f0 --- /dev/null +++ b/agents/agent_builder/agents/temporary_function_writer/settings.json @@ -0,0 +1,7 @@ +{ + "model": "gpt-4-1106-preview", + "description": "Foundation Swarm Builder", + "tools": [{ "type": "code_interpreter" }], + "metadata": {} + } + \ No newline at end of file diff --git a/agents/agent_builder/agents/tool_creator/instructions.md b/agents/agent_builder/agents/tool_creator/instructions.md new file mode 100644 index 0000000..8489d84 --- /dev/null +++ b/agents/agent_builder/agents/tool_creator/instructions.md @@ -0,0 +1,34 @@ +# Mission +- Transcript a user's request into a valid schema to represent a valid function call + +# Background info +None + +# Rules +- Always check the provided files to ground your thoughts. +- If a term can have multiple meanings, always prefer those mentioned in the provided documents. + +# Instructions +- Initialize: Prepare to receive input for the creation of a new function using the request_function tool. +- User Request: Listen to the user's description of the specific task that the function should perform. +- Function Name: + a. Derived from the task description, formulate a concise and descriptive function name. + b. 
Aim for clarity and specificity to convey the function's purpose effectively. +- Function Description: + a. Write a clear and precise description of the function's expected behavior. + b. Include details about what the function will accomplish and any side effects. + c. (Emphasize) Ensure that the description explicitly communicates the function's intended outcome to avoid ambiguity. +- Input Arguments JSON Schema: + a. Based on the requirements of the task, define the JSON schema for the input arguments. + b. The schema should be comprehensive and must specify the types, required fields, and constraints for each input argument. + c. Ensure that the schema aligns with the user's requirements and the function's intended behavior. +- Validation: Cross-check the name, description, and JSON schema against the user's requirements to confirm accuracy and completeness. +- Execution: Utilize the request_function tool with the following + inputs: + name: [Function Name] + descriptions: [Function Description] + input_argument_json_schema: [Input Arguments JSON Schema] +- Feedback Loop: Promptly present the newly created function specifications to the user for any feedback or necessary revisions. +- Iterate: Make adjustments as requested by the user, refining the function name, description, and input argument schema until it meets the user's satisfaction. +Finalize: Once the user gives approval, consider the function creation process complete. +- Note: Remember to prioritize user requirements and emphasize clear communication in the function description, as highlighted by the user. \ No newline at end of file diff --git a/agents/agent_builder/agents/tool_creator/settings.json b/agents/agent_builder/agents/tool_creator/settings.json new file mode 100644 index 0000000..8b903e4 --- /dev/null +++ b/agents/agent_builder/agents/tool_creator/settings.json @@ -0,0 +1,33 @@ +{ + "model": "gpt-4-1106-preview", + "description": "assistant to demonstrate tool creation", + "tools": [ + { "type": "code_interpreter" }, + { + "type": "function", + "function": { + "name": "function_request", + "description": "request an authority to grant you access to a new function", + "parameters": { + "type": "object", + "properties": { + "name": { + "type": "string", + "description": "name of the function" + }, + "description": { + "type": "string", + "description": "expected function behaviour" + }, + "schema": { + "type": "string", + "description": "the input arguments for the requested function following the JOSN schema in a format ready to be serialized" + } + }, + "required": ["name", "schema"] + } + } + } + ], + "metadata": {} +} diff --git a/agents/agent_builder/create.py b/agents/agent_builder/create.py index 14f350c..6e4745f 100644 --- a/agents/agent_builder/create.py +++ b/agents/agent_builder/create.py @@ -2,117 +2,131 @@ import json from shared.openai_config import get_openai_client -agents_path = 'agents' -client = get_openai_client() - -# Check if the 'agents' folder is empty or doesn't exist -if not os.path.exists(agents_path) or not os.path.isdir(agents_path) or not os.listdir(agents_path): - raise ValueError('The "agents" folder is missing, not a directory, or empty.') - -existing_assistants = {} - -for assistant in client.beta.assistants.list(limit=100): - existing_assistants[assistant.name] = assistant - - -# Iterate over each folder inside the 'agents' folder -for agent_name in os.listdir(agents_path): - agent_folder = os.path.join(agents_path, agent_name) - - existing_files = {} - requested_files = [] - 
existing_agent = {} - if agent_name in existing_assistants: - existing_agent = existing_assistants[agent_name] - for file_id in existing_agent.file_ids: - existing_file = client.files.retrieve(file_id=file_id) - existing_files[existing_file.filename] = existing_file - - - if os.path.isdir(agent_folder): - # Read contents from the 'instructions.md' file - instructions = '' - instructions_file_path = os.path.join(agent_folder, 'instructions.md') - if os.path.isfile(instructions_file_path): - with open(instructions_file_path, 'r') as f: - instructions = f.read() - - # Read contents from the 'settings.json' file - settings = {} - settings_file_path = os.path.join(agent_folder, 'settings.json') - if os.path.isfile(settings_file_path): - with open(settings_file_path, 'r') as f: - settings = json.load(f) - - # Check for the 'files' subfolder and process its contents - files = [] - files_folder = os.path.join(agent_folder, 'files') - if os.path.isdir(files_folder): - for filename in os.listdir(files_folder): - requested_files.append(filename) - # Doesn't handle if file has been modified - if filename not in existing_files: - file_path = os.path.join(files_folder, filename) - with open(file_path, 'rb') as file_data: - # Upload each file to OpenAI - file_object = client.files.create(file=file_data, purpose='assistants') - files.append({"name": filename, "id": file_object.id}) - - print(agent_name) - print("") - print(instructions) - if files: +def create_assistants(): + agents_path = 'agents' + client = get_openai_client() + + # Check if the 'agents' folder is empty or doesn't exist + if not os.path.exists(agents_path) or not os.path.isdir(agents_path) or not os.listdir(agents_path): + raise ValueError('The "agents" folder is missing, not a directory, or empty.') + + existing_assistants = {} + + for assistant in client.beta.assistants.list(limit=100): + existing_assistants[assistant.name] = assistant + + + # Iterate over each folder inside the 'agents' folder + for agent_name in os.listdir(agents_path): + agent_folder = os.path.join(agents_path, agent_name) + + existing_files = {} + requested_files = [] + existing_agent = {} + if agent_name in existing_assistants: + existing_agent = existing_assistants[agent_name] + for file_id in existing_agent.file_ids: + existing_file = client.files.retrieve(file_id=file_id) + existing_files[existing_file.filename] = existing_file + + + if os.path.isdir(agent_folder): + # Read contents from the 'instructions.md' file + instructions = '' + instructions_file_path = os.path.join(agent_folder, 'instructions.md') + if os.path.isfile(instructions_file_path): + with open(instructions_file_path, 'r') as f: + instructions = f.read() + + # Read contents from the 'settings.json' file + settings = {} + settings_file_path = os.path.join(agent_folder, 'settings.json') + if os.path.isfile(settings_file_path): + with open(settings_file_path, 'r') as f: + settings = json.load(f) + + # Check for the 'files' subfolder and process its contents + files = [] + files_folder = os.path.join(agent_folder, 'files') + if os.path.isdir(files_folder): + for filename in os.listdir(files_folder): + requested_files.append(filename) + # Doesn't handle if file has been modified + if filename not in existing_files: + file_path = os.path.join(files_folder, filename) + with open(file_path, 'rb') as file_data: + # Upload each file to OpenAI + file_object = client.files.create(file=file_data, purpose='assistants') + files.append({"name": filename, "id": file_object.id}) + + print(agent_name) print("") 
- print(f"Files: {list(map(lambda x: x['name'], files))}") - - assistant={} - - if existing_agent: - print(f"{agent_name} already exists... validating properties") - update_model = existing_agent.model != settings["model"] - update_instructions = existing_agent.instructions != instructions - #need to evaluate tools - - update_params = {} - - requested_files_set = set(requested_files) - existing_files_set = set(existing_files.keys()) - - if update_model: - update_params["model"] = settings["model"] - if update_instructions: - update_params["instructions"] = instructions - if files or requested_files_set != existing_files_set: - retained_set = existing_files_set.intersection(requested_files_set) - all_file_ids = [] - for key in retained_set: - all_file_ids.append(existing_files[key].id) - all_file_ids += list(map(lambda x: x['id'], files)) - update_params['file_ids'] = all_file_ids - if not any( tool.type == "retrieval" for tool in existing_agent.tools): - update_params['tools'] = existing_agent.tools - update_params['tools'].append({'type': 'retrieval'}) - - if len(update_params) != 0: - print(f"Updating {agent_name}'s { ','.join(update_params.keys()) }") - update_params['assistant_id'] = existing_agent.id - assistant = client.beta.assistants.update(**update_params) - else: - print(f"{agent_name} is up to date") - else: - - create_params = { - "name": agent_name, - "instructions": instructions, - "model": settings["model"], - "tools": settings["tools"] - } - - # Only include 'file_ids' if there are files + print(instructions) if files: - create_params['tools'].append({'type': 'retrieval'}) - create_params['file_ids'] = list(map(lambda x: x['id'], files)) - - # Create the assistant using the uploaded file IDs if files exist - assistant = client.beta.assistants.create(**create_params) - print("***********************************************") \ No newline at end of file + print("") + print(f"Files: {list(map(lambda x: x['name'], files))}") + + assistant={} + + if existing_agent: + print(f"{agent_name} already exists... 
validating properties") + update_model = existing_agent.model != settings["model"] + update_description = existing_agent.description != settings["description"] + update_instructions = existing_agent.instructions != instructions + existing_agent_tools = list(filter(lambda item: item.type == "function", existing_agent.tools)) + setting_agent_tools = list(filter(lambda item: item["type"] == "function", settings["tools"])) + update_tools = existing_agent_tools != setting_agent_tools + + update_params = {} + + requested_files_set = set(requested_files) + existing_files_set = set(existing_files.keys()) + + if update_model: + update_params["model"] = settings["model"] + if update_instructions: + update_params["instructions"] = instructions + if update_description: + update_params["description"] = settings["description"] + if files or requested_files_set != existing_files_set: + retained_set = existing_files_set.intersection(requested_files_set) + all_file_ids = [] + for key in retained_set: + all_file_ids.append(existing_files[key].id) + all_file_ids += list(map(lambda x: x['id'], files)) + update_params['file_ids'] = all_file_ids + if not any( tool.type == "retrieval" for tool in existing_agent.tools): + update_params['tools'] = existing_agent.tools + update_params['tools'].append({'type': 'retrieval'}) + if update_tools: + update_params['tools'] = settings["tools"] + if len(requested_files) > 0: + update_params['tools'].append({'type': 'retrieval'}) + + if len(update_params) != 0: + print(f"Updating {agent_name}'s { ','.join(update_params.keys()) }") + update_params['assistant_id'] = existing_agent.id + assistant = client.beta.assistants.update(**update_params) + else: + print(f"{agent_name} is up to date") + else: + + create_params = { + "name": agent_name, + "instructions": instructions, + "description": settings["description"], + "model": settings["model"], + "tools": settings["tools"] + } + + # Only include 'file_ids' if there are files + if files: + create_params['tools'].append({'type': 'retrieval'}) + create_params['file_ids'] = list(map(lambda x: x['id'], files)) + + # Create the assistant using the uploaded file IDs if files exist + assistant = client.beta.assistants.create(**create_params) + print("***********************************************") + +if __name__ == '__main__': + create_assistants() \ No newline at end of file diff --git a/agents/tool_maker/assistant_manager.py b/agents/tool_maker/assistant_manager.py index a7d7544..5291df4 100644 --- a/agents/tool_maker/assistant_manager.py +++ b/agents/tool_maker/assistant_manager.py @@ -2,36 +2,12 @@ from pathlib import Path import os import json - +from agents.agent_builder.create import create_assistants class AssistantManager: - request_function_tool = r"""{ - "name": "function_request", - "description": "request an authority to grant you access to a new function", - "parameters": { - "type": "object", - "properties": { - "name": { - "type": "string", - "description": "name of the function" - }, - "description": { - "type": "string", - "description": "expected function behaviour" - }, - "schema": { - "type": "string", - "description": "the input arguments for the requested function following the JOSN schema in a format ready to be serialized" - } - }, - "required": [ - "name", - "schema" - ] - } - }""" def __init__(self, client): + create_assistants() self.client = client self.assistant = None Path(__file__).absolute().parent @@ -46,7 +22,7 @@ def get_assistant(self): if not self.assistant_package["name"] in [ assistant.name for 
assistant in self.client.beta.assistants.list() ]: - assistant = self.make_tool_creation_assistant() + raise ValueError(f'{self.assistant_package["name"]} needs to be created using create.py in /agents/agent_builder/') else: assistant_dict = { assistant.name: assistant.id @@ -64,7 +40,7 @@ def get_coding_assistant(self): if not name in [ assistant.name for assistant in self.client.beta.assistants.list() ]: - assistant = self.make_coding_assistant() + raise ValueError(f'{name} needs to be created using create.py in /agents/agent_builder/') else: assistant_dict = { assistant.name: assistant.id @@ -76,32 +52,8 @@ def get_coding_assistant(self): self.assistant = assistant return assistant - def make_tool_creation_assistant(self): - tools = [ - ToolManager.tool_from_function_schema( - json.loads(AssistantManager.request_function_tool) - ) - ] - assistant = self.client.beta.assistants.create( - model=self.assistant_package["model"], - description=self.assistant_package["description"], - instructions=self.assistant_package["instructions"], - name=self.assistant_package["name"], - tools=tools, - ) - return assistant - - def make_coding_assistant(self): - code_assistant = self.client.beta.assistants.create( - model="gpt-4-1106-preview", - instructions="you will be provided a json schema of an OpenAI function tool from an API not a human user. The json will contain all information about the function you will need to write it in python code. You will return only the python function you wrote and no additional text as you are talking to an API and extraneous output will cause execution errors. You must always implement the actual code. Generic placeholders or pseudo code will break the api. If you need clarification to write real functioning code, request for extra info in arguments without creating a real function or valid schema", - name="temporary_function_writer", - ) - return code_assistant - - if __name__ == "__main__": - from shared.openai_config import get_openai_client + from shared.openai_config import get_openai_client client = get_openai_client() assistant_manager = AssistantManager(client=client) diff --git a/agents/tool_maker/tool_creator_metadata.json b/agents/tool_maker/tool_creator_metadata.json index 2573eb1..5b0e99c 100644 --- a/agents/tool_maker/tool_creator_metadata.json +++ b/agents/tool_maker/tool_creator_metadata.json @@ -1,6 +1,3 @@ { - "model": "gpt-4-1106-preview", - "description": "assistant to demonstrate tool creation", - "instructions": "Instruction Set for Assistant-to-be-Tool_Creator: Initialize: Prepare to receive input for the creation of a new function using the request_function tool.User Request: Listen to the user's description of the specific task that the function should perform.Function Name: a. Derived from the task description, formulate a concise and descriptive function name. b. Aim for clarity and specificity to convey the function's purpose effectively.Function Description: a. Write a clear and precise description of the function's expected behavior. b. Include details about what the function will accomplish and any side effects. c. (Emphasize) Ensure that the description explicitly communicates the function's intended outcome to avoid ambiguity.Input Arguments JSON Schema: a. Based on the requirements of the task, define the JSON schema for the input arguments. b. The schema should be comprehensive and must specify the types, required fields, and constraints for each input argument. c. 
Ensure that the schema aligns with the user's requirements and the function's intended behavior.Validation: Cross-check the name, description, and JSON schema against the user's requirements to confirm accuracy and completeness.Execution: Utilize the request_function tool with the following inputs:name: [Function Name]descriptions: [Function Description]input_argument_json_schema: [Input Arguments JSON Schema]Feedback Loop: Promptly present the newly created function specifications to the user for any feedback or necessary revisions.Iterate: Make adjustments as requested by the user, refining the function name, description, and input argument schema until it meets the user's satisfaction.Finalize: Once the user gives approval, consider the function creation process complete.Note: Remember to prioritize user requirements and emphasize clear communication in the function description, as highlighted by the user.", "name": "tool_creator" } \ No newline at end of file From e080f4cebdd9715e0c19fdbb77820e3a83505cc0 Mon Sep 17 00:00:00 2001 From: FireMMDC Date: Sun, 12 Nov 2023 22:05:48 -0500 Subject: [PATCH 091/141] Transforming OpenAI function structures back into the JSON format for describing function tools --- agents/agent_builder/create.py | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/agents/agent_builder/create.py b/agents/agent_builder/create.py index 6e4745f..ababb4f 100644 --- a/agents/agent_builder/create.py +++ b/agents/agent_builder/create.py @@ -73,7 +73,8 @@ def create_assistants(): update_model = existing_agent.model != settings["model"] update_description = existing_agent.description != settings["description"] update_instructions = existing_agent.instructions != instructions - existing_agent_tools = list(filter(lambda item: item.type == "function", existing_agent.tools)) + existing_agent_tools_raw = list(filter(lambda item: item.type == "function", existing_agent.tools)) + existing_agent_tools = [ ({ 'type': item.type, 'function': { 'name': item.function.name, 'description': item.function.description, 'parameters': item.function.parameters } }) for item in existing_agent_tools_raw ] setting_agent_tools = list(filter(lambda item: item["type"] == "function", settings["tools"])) update_tools = existing_agent_tools != setting_agent_tools From e5497ff40debad617e56523d0859068df71a1270 Mon Sep 17 00:00:00 2001 From: FireMMDC Date: Mon, 13 Nov 2023 00:51:54 -0500 Subject: [PATCH 092/141] Fixed function def in settings.json, attempting to make instructions more reliable needs more work. Adjusting create.py to deal with how tools are managed between systems, adding function to get existing functions to function writer --- .../temporary_function_writer/instructions.md | 1 + .../temporary_function_writer/settings.json | 22 +++++++--- agents/agent_builder/create.py | 5 ++- agents/tool_maker/chat_manager.py | 40 ++++++++++++++++++- 4 files changed, 59 insertions(+), 9 deletions(-) diff --git a/agents/agent_builder/agents/temporary_function_writer/instructions.md b/agents/agent_builder/agents/temporary_function_writer/instructions.md index 39f36ae..70c8b57 100644 --- a/agents/agent_builder/agents/temporary_function_writer/instructions.md +++ b/agents/agent_builder/agents/temporary_function_writer/instructions.md @@ -14,4 +14,5 @@ None # Instructions - Attempt to convert provided JSON to a fully functiong python function +- The function should be placed in a python code block ```python ... 
``` - If you are unable to preform this transalation return request for additional info as arguments \ No newline at end of file diff --git a/agents/agent_builder/agents/temporary_function_writer/settings.json b/agents/agent_builder/agents/temporary_function_writer/settings.json index 51372f0..aaf546a 100644 --- a/agents/agent_builder/agents/temporary_function_writer/settings.json +++ b/agents/agent_builder/agents/temporary_function_writer/settings.json @@ -1,7 +1,17 @@ { - "model": "gpt-4-1106-preview", - "description": "Foundation Swarm Builder", - "tools": [{ "type": "code_interpreter" }], - "metadata": {} - } - \ No newline at end of file + "model": "gpt-4-1106-preview", + "description": "Foundation Swarm Builder", + "tools": [ + { "type": "code_interpreter" }, + { + "type": "function", + "function": { + "name": "get_existing_functions", + "description": "Provides functions that have already been built", + "parameters": { "type": "object", "properties": {} }, + "required": [] + } + } + ], + "metadata": {} +} diff --git a/agents/agent_builder/create.py b/agents/agent_builder/create.py index ababb4f..5cb79f3 100644 --- a/agents/agent_builder/create.py +++ b/agents/agent_builder/create.py @@ -1,9 +1,12 @@ import os import json +from pathlib import Path from shared.openai_config import get_openai_client def create_assistants(): - agents_path = 'agents' + agents_path = os.path.join( + Path(__file__).absolute().parent, "agents" + ) client = get_openai_client() # Check if the 'agents' folder is empty or doesn't exist diff --git a/agents/tool_maker/chat_manager.py b/agents/tool_maker/chat_manager.py index 2faac31..d2b5181 100644 --- a/agents/tool_maker/chat_manager.py +++ b/agents/tool_maker/chat_manager.py @@ -44,6 +44,17 @@ def run_python_from_function_name(self, call): "output": f"{{{type(error)}:{error.args}}}", } return response + + def get_existing_functions(self): + print("Get Built Functions") + results = [] + if os.path.exists(self.functions_path): + for filename in os.listdir(self.functions_path): + if filename.endswith(".json"): + file_path = os.path.join(self.functions_path,filename) + with open(file_path, "r") as file: + results.append(file) + return results def handle_fucntion_request( self, @@ -57,13 +68,14 @@ def handle_fucntion_request( # Create Function Tool schema = ToolManager.schema_from_response(call.function.arguments) tool = ToolManager.tool_from_function_schema(schema) + filtered_interface_assistant_tools = list(filter(lambda tool: tool.type == "function" ,interface_assistant.tools)) if tool["function"]["name"] in [ previous_tool.function.name - for previous_tool in interface_assistant.tools + for previous_tool in filtered_interface_assistant_tools ]: tools = [ previous_tool - for previous_tool in interface_assistant.tools + for previous_tool in filtered_interface_assistant_tools if previous_tool.function.name != tool["function"]["name"] ] interface_assistant = self.client.beta.assistants.update( @@ -95,6 +107,8 @@ def handle_fucntion_request( os.mkdir(self.functions_path) with open(f"{self.functions_path}/{name}.py", "w") as file: file.writelines(function_lines) + with open(f"{self.functions_path}/{name}.json", "w") as file: + file.writelines(tool) response = {"tool_call_id": call.id, "output": "{success}"} @@ -113,6 +127,28 @@ def simple_run(self, run, thread): run = self.client.beta.threads.runs.retrieve( run_id=run.id, thread_id=thread.id ) + if run.status == "requires_action": + responses = [] + for call in run.required_action.submit_tool_outputs.tool_calls: + 
print(f"calling: {call.function.name}") + if call.function.name == "get_existing_functions": + available_functions = self.get_existing_functions() + response = {"tool_call_id": call.id, "output": f"result:{available_functions}"} + responses.append(response) + else: + response = {"tool_call_id": call.id, "output": f"result:None"} + responses.append(response) + try: + run = self.client.beta.threads.runs.submit_tool_outputs( + run_id=run.id, + thread_id=thread.id, + tool_outputs=responses, + ) + except: + print(run.status) + print(run) + print(call) + print(responses) response = ( self.client.beta.threads.messages.list(thread_id=thread.id) From 5ce33b970bd21de61a7dcae2cf4f9df052c804b4 Mon Sep 17 00:00:00 2001 From: David Shapiro Date: Mon, 13 Nov 2023 13:14:45 -0500 Subject: [PATCH 093/141] Update README.md --- README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/README.md b/README.md index 0a80005..450f6f7 100644 --- a/README.md +++ b/README.md @@ -27,7 +27,7 @@ Exclusively use cutting edge stuff, like OpenAI's latest Agents endpoint. For ex Fully autonomous swarms are the goal. That means a human does not need to be in the loop telling it what to do, supervising, or anything. Characteristics of a fully autonomous swarm: -1. **Principle or Mission Driven:** Once instantiated, the swarm pursues its mission or goals without supervision. It may self-direct based on principles such as the heuristic imperatives. This is the "self-directed" maxim. +1. **Self-Directing:** Once instantiated, the swarm pursues its mission or goals without supervision. It may self-direct based on principles such as the heuristic imperatives, or by specific mission parameters. 2. **Self-Correcting:** The swarm must detect and correct technical, strategic, epistemic, and other errors and then correct them. 3. **Self-Improving:** Eventually, the swarm should enhance its own fundamental capabilities over time. From 3a4016d7b7979d49d3b20f157aa7e1eb548bfd68 Mon Sep 17 00:00:00 2001 From: David Shapiro Date: Mon, 13 Nov 2023 13:14:45 -0500 Subject: [PATCH 094/141] Update README.md --- README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/README.md b/README.md index 0a80005..450f6f7 100644 --- a/README.md +++ b/README.md @@ -27,7 +27,7 @@ Exclusively use cutting edge stuff, like OpenAI's latest Agents endpoint. For ex Fully autonomous swarms are the goal. That means a human does not need to be in the loop telling it what to do, supervising, or anything. Characteristics of a fully autonomous swarm: -1. **Principle or Mission Driven:** Once instantiated, the swarm pursues its mission or goals without supervision. It may self-direct based on principles such as the heuristic imperatives. This is the "self-directed" maxim. +1. **Self-Directing:** Once instantiated, the swarm pursues its mission or goals without supervision. It may self-direct based on principles such as the heuristic imperatives, or by specific mission parameters. 2. **Self-Correcting:** The swarm must detect and correct technical, strategic, epistemic, and other errors and then correct them. 3. **Self-Improving:** Eventually, the swarm should enhance its own fundamental capabilities over time. 
From 58de989b0aca4cc9b9c1741f102fd0bc2d72ed87 Mon Sep 17 00:00:00 2001 From: FireMMDC Date: Mon, 13 Nov 2023 23:45:24 -0500 Subject: [PATCH 095/141] Refining temporary_function_writer instructions, removed additional instructions from call to temporary_function_writer's thread, storing requested function scheme in functions_path --- .../temporary_function_writer/instructions.md | 16 +++++++--------- .../temporary_function_writer/settings.json | 4 ++-- agents/tool_maker/chat_manager.py | 4 ++-- 3 files changed, 11 insertions(+), 13 deletions(-) diff --git a/agents/agent_builder/agents/temporary_function_writer/instructions.md b/agents/agent_builder/agents/temporary_function_writer/instructions.md index 70c8b57..887de9d 100644 --- a/agents/agent_builder/agents/temporary_function_writer/instructions.md +++ b/agents/agent_builder/agents/temporary_function_writer/instructions.md @@ -3,16 +3,14 @@ - The JSON will contain all information about the function and you will need to translate it into a python function. # Background info -None +- None # Rules -- Return only the python function you have written -- You must always implement the function with actual code -- Do not write any additional text as you are talking to an API, extraneous output will cause execution errors. -- The function should not contain generic placeholders of pseudo code, as it will beark the API -- If clarification is needed to write functioning code, request additional info as arguments without creating a real function or valid schema +- Your reseponse must only be a python markdown block containing the translated function +- The function must be fully implemented in python +- The function should not contain pseudo code or placeholders # Instructions -- Attempt to convert provided JSON to a fully functiong python function -- The function should be placed in a python code block ```python ... ``` -- If you are unable to preform this transalation return request for additional info as arguments \ No newline at end of file +- Translate the JSON to a fully functiong python function +- Return the python function in a markdown block +- If you are unable to preform this transalation return reply asking for additional info as arguments \ No newline at end of file diff --git a/agents/agent_builder/agents/temporary_function_writer/settings.json b/agents/agent_builder/agents/temporary_function_writer/settings.json index aaf546a..94215df 100644 --- a/agents/agent_builder/agents/temporary_function_writer/settings.json +++ b/agents/agent_builder/agents/temporary_function_writer/settings.json @@ -1,13 +1,13 @@ { "model": "gpt-4-1106-preview", - "description": "Foundation Swarm Builder", + "description": "Function writer", "tools": [ { "type": "code_interpreter" }, { "type": "function", "function": { "name": "get_existing_functions", - "description": "Provides functions that have already been built", + "description": "List available functions that can be called", "parameters": { "type": "object", "properties": {} }, "required": [] } diff --git a/agents/tool_maker/chat_manager.py b/agents/tool_maker/chat_manager.py index d2b5181..cf253aa 100644 --- a/agents/tool_maker/chat_manager.py +++ b/agents/tool_maker/chat_manager.py @@ -95,8 +95,8 @@ def handle_fucntion_request( functional_run = self.client.beta.threads.runs.create( thread_id=functional_thread.id, assistant_id=functional_assistant.id, - instructions="please remember you are talking to an API, minimize output text tokens for cost saving. 
Also realise that your output text must be directly runnable as a py file so begin with the def keyword and do not provide any text ouput which is not commented to avoid breaking the system. Target python 3 and windows", ) + functional_response = self.simple_run( run=functional_run, thread=functional_thread, @@ -108,7 +108,7 @@ def handle_fucntion_request( with open(f"{self.functions_path}/{name}.py", "w") as file: file.writelines(function_lines) with open(f"{self.functions_path}/{name}.json", "w") as file: - file.writelines(tool) + file.writelines(str(schema)) response = {"tool_call_id": call.id, "output": "{success}"} From 8c82bd273ce81931dfb4cc599f3f1a553015747b Mon Sep 17 00:00:00 2001 From: Georgia Phillips Date: Tue, 14 Nov 2023 13:13:43 -0700 Subject: [PATCH 096/141] Add thread locking --- agents/agent_functions/connect.py | 11 +++++++++++ 1 file changed, 11 insertions(+) diff --git a/agents/agent_functions/connect.py b/agents/agent_functions/connect.py index f3bd9a4..91dbe35 100644 --- a/agents/agent_functions/connect.py +++ b/agents/agent_functions/connect.py @@ -27,6 +27,8 @@ queues = {} channels = [] +lock = threading.Lock() + def processSendMessage(agent, outputs, action, arguments): if ('talksTo' in agent) and (arguments['recipient'] in agent['talksTo']): if arguments['recipient'] == "USER": @@ -116,6 +118,8 @@ def handleThreadForAgent(agent): if waitingForMessages: message = queue.get(block=True) if message is not None: + lock.acquire() + print(f"[{agent['name']}] ACQUIRES LOCK") waitingForMessages = False # print(f"[{agent['name']}] Recieved: {message}") messages.append(message) @@ -150,6 +154,9 @@ def handleThreadForAgent(agent): messages.append(retrievedMessage) print(f"[{agent['name']}] Message: {retrievedMessage}") i+=1 + if lock.locked(): + lock.release() + print(f"[{agent['name']}] RELEASES LOCK") elif run.status == 'requires_action': outputs = [] submitOutput=True @@ -178,6 +185,10 @@ def handleThreadForAgent(agent): run_id=run.id, tool_outputs=outputs ) + if lock.locked(): + lock.release() + print(f"[{agent['name']}] RELEASES LOCK") + time.sleep(1) def buildChannel(agent): From e3781b1043e90a81bb40e6b3bd694322ff23dfb7 Mon Sep 17 00:00:00 2001 From: Ruijian Zha Date: Tue, 14 Nov 2023 15:14:31 -0500 Subject: [PATCH 097/141] Update README.md --- agents/tool_maker/README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/agents/tool_maker/README.md b/agents/tool_maker/README.md index c38e575..b9924d9 100644 --- a/agents/tool_maker/README.md +++ b/agents/tool_maker/README.md @@ -50,4 +50,4 @@ Note: Remember to prioritize user requirements and emphasize clear communication ``` ## Flowchart -![[imgs/flow.png]] +![Flow Diagram](imgs/flow.png) From 472d7a45a2860ea99c0c7504551edf50059ec855 Mon Sep 17 00:00:00 2001 From: Ruijian Zha Date: Tue, 14 Nov 2023 15:14:31 -0500 Subject: [PATCH 098/141] Update README.md --- agents/tool_maker/README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/agents/tool_maker/README.md b/agents/tool_maker/README.md index c38e575..b9924d9 100644 --- a/agents/tool_maker/README.md +++ b/agents/tool_maker/README.md @@ -50,4 +50,4 @@ Note: Remember to prioritize user requirements and emphasize clear communication ``` ## Flowchart -![[imgs/flow.png]] +![Flow Diagram](imgs/flow.png) From e1c281cb08afedebfa9da2b2627a3e85a92a007b Mon Sep 17 00:00:00 2001 From: Starblaiz <52007944+Starblaiz@users.noreply.github.com> Date: Wed, 15 Nov 2023 01:59:48 +0000 Subject: [PATCH 099/141] Adding discord_comms.py code from Google 
Colab experimentation, as well as corresponding README.md documentation --- shared/discord_comms/README.md | 94 ++++++++++++++++++++++++++++++++++ 1 file changed, 94 insertions(+) create mode 100644 shared/discord_comms/README.md diff --git a/shared/discord_comms/README.md b/shared/discord_comms/README.md new file mode 100644 index 0000000..f1a5b96 --- /dev/null +++ b/shared/discord_comms/README.md @@ -0,0 +1,94 @@ +# Discord Comms Bot + +## Overview +This Discord bot is a sub-module of the larger HAAS system designed for creating and managing a swarm of AI agents. Within this system, various AI agents fulfill different roles — some provide ethics and oversight, others take on managerial responsibilities, and many serve as worker agents performing discrete tasks. + +The Discord bot's primary function is to facilitate communication among these AI agents on Discord. It allows the AI swarm to occupy a designated channel within a server, where they can carry out discussions and coordinate their actions efficiently. The bot enables the swarm to send messages and create threads on Discord, providing an organized platform for their complex interactions. + + +## Usage + +The AI agents interact on Discord by utilizing the `discord_send()` and `discord_create_thread()` functions. These functions are integral to the AI swarm's communication and self-organization within the Discord environment. + + +### Message Sending + +Agents can send messages to the designated Discord channel using the `discord_send(message: str, channel_id, pinned = False)` function. This function allows agents to post updates, commands, or any relevant information to the swarm. + +In order to have the function execute on the bot thread, it should be called using the `discord_thread_task(function)` function as follows: +``` +discord_thread_task(discord_send, message, channel_id, pinned) +``` + +### Message Reading + +Agents can read a specified number of messages in the designated Discord channel or thread using the `discord_get_messages(channel_id, num_messages)` function. This function allows agents to catch up with the present conversation and see the conversation history. + +In order to have the function execute on the bot thread, it should be called using the `discord_thread_task(function)` function as follows: +``` +discord_thread_task(discord_get_messages, channel_id, num_messages) +``` + + +### Thread Creation + +For more organized discussions or specific task delegations, agents can use the `discord_create_thread(thread_name: str, channel_id, public = False)` function to create threads in the Discord channel. This feature aids in segregating discussions based on topics or tasks, facilitating clearer and more focused communication among the agents. + +In order to have the function execute on the bot thread, it should be called using the `discord_thread_task(function)` function as follows: +``` +discord_thread_task(discord_create_thread,thread_name, channel_id, public) +``` + +### Channel and Thread IDs + +It should be noted that channel ID's and Thread IDs are interchangeable. You can use a thread ID in the `channel_id` parameter of `discord_send` for example to send a message to a specific thread instead of a channel. + +### Swarm Discussions + +The AI swarm, consisting of various types of agents, will use the Discord channel to carry out their discussions and planning. 
The bot's capabilities enable these AI agents to simulate a real-time, collaborative working environment, mirroring human team dynamics but on a digital platform. + + +## Creating and Inviting A Discord Bot + +1. **Creating a Bot** + - Go to the Discord Developer Portal. + - Click on the "New Application" button. + - Give your application a name and create it. + - In the application, navigate to the "Bot" tab and click "Add Bot". + - Here, you can find your bot's token. Keep it secure. + +1. **Inviting the Bot to Your Server** + - In the Developer Portal, navigate to the "OAuth2" tab. + - Under "Scopes", select "bot". + - Under "Bot Permissions", choose the permissions the bot needs: + - Send Messages + - Read Messages + - Create Public Threads + - Create Private Threads + - Copy the generated URL under "Scopes" and open it in your browser to invite the bot to your Discord server. + + +## Events and Commands +Some example events and commands are included to demonstrate how commands from the Discord channel are recieved in the bot. + +### !hello +Typing `!hello` in the Discord channel will trigger the bot to respond with `Hello!` + +### !hello2 +Typing `!hello2 "Text 1" "Text 2"` will trigger the bot to respond with `You said: "Text 1" and "Text 2"` + +### on_command_error() +Typing a command that isn't recognised or malformed will cause the bot to respond with an error. For example, simply typing `!hello2` with no parameters will cause the `on_command_error()` function to trigger, giving the response `An error occurred: text1 is a required argument that is missing.` + +## Dependancies +``` +!pip install discord.py +``` + +## TODO +- Further improve documentation +- Convert code to class for better encapsulation +- Investigate substantial (~30 second) delay between command being issued and things showing up in Discord +- Investigate issue where messages will sometimes go missing if they another command is issued too soon (may be related to the delay issue) +- Add more functionality to facilitate agent organisation once we have a clearer view of the kinds of patterns that will be needed +- Add more useful commands for the agents (or humans) to utilise in the Discord chat(s) From c6d7480239f377bbff1f896fbc92950a084e802e Mon Sep 17 00:00:00 2001 From: Starblaiz <52007944+Starblaiz@users.noreply.github.com> Date: Wed, 15 Nov 2023 02:19:57 +0000 Subject: [PATCH 100/141] Readding discord.py code due to an issue --- shared/discord_comms/discord_comms.py | 119 ++++++++++++++++++++++++++ 1 file changed, 119 insertions(+) create mode 100644 shared/discord_comms/discord_comms.py diff --git a/shared/discord_comms/discord_comms.py b/shared/discord_comms/discord_comms.py new file mode 100644 index 0000000..ba295a4 --- /dev/null +++ b/shared/discord_comms/discord_comms.py @@ -0,0 +1,119 @@ +# Dependancies +# !pip install discord.py + +# Import the necessary discord libraries +import discord +from discord.ext import commands +from discord import Intents + +# Token and Channel ID for the bot (Replace with actual values) +TOKEN = 'YOUR_BOT_TOKEN' # Replace with your bot token +CHANNEL_ID = 0 # Replace with your channel ID + +# Setting up bot intents for various functionalities +intents = Intents.default() +intents.messages = True +intents.message_content = True +intents.guilds = True + +# Initialize the bot with a command prefix and specified intents +bot = commands.Bot(command_prefix='!', intents=intents) + +# Event triggered when the bot is ready and connected +@bot.event +async def on_ready(): + await 
bot.wait_until_ready() # Ensure bot is fully connected + channel = bot.get_channel(CHANNEL_ID) + await channel.send('**Agent Bot Online**') # Send initial message to the channel + +# Error handling for commands +@bot.event +async def on_command_error(ctx, error): + if isinstance(error, commands.CommandNotFound): + await ctx.send("That command doesn't exist!") # Command not found error + else: + await ctx.send(f"An error occurred: {error}") # Other errors + +# Basic command to respond with "Hello!" +@bot.command() +async def hello(ctx): + await ctx.send("Hello!") + +# Command to echo two provided texts +@bot.command() +async def hello2(ctx, text1: str, text2: str): + await ctx.send(f"You said: {text1} and {text2}") + +# Command to create a thread from a message +@bot.command() +async def createthread(ctx, message_id: int, thread_name: str, duration_minutes: int = 0): + message = await ctx.channel.fetch_message(message_id) # Fetch the message + thread = await message.create_thread(name=thread_name, auto_archive_duration=duration_minutes) # Create thread + await thread.send(f"Thread '{thread_name}' created!") # Send confirmation message in thread + +# Necessary imports for asynchronous operation +import asyncio +import nest_asyncio +nest_asyncio.apply() + +import threading + +# Function to run the bot in a separate thread +def run_bot(): + loop = asyncio.new_event_loop() + asyncio.set_event_loop(loop) + bot.run(TOKEN) + +# Start the bot in a new thread +threading.Thread(target=run_bot).start() + +# Helper function to create tasks for discord bot +def discord_thread_task(func, *args): + bot.loop.create_task(func(*args)) + +# Async function to send a message to a specified channel +async def discord_send(message: str, channel_id, pinned=False): + channel = bot.get_channel(channel_id) + message_sent = await channel.send(message) + if pinned: + await message_sent.pin() # Pin the message if required + +messages = [] +channel_not_found = False + +# Async function to read a number of messages from a specified channel +async def discord_get_messages(channel_id, num_messages): + channel_not_found = False + # Get the channel or thread object + channel = bot.get_channel(channel_id) + if channel is None: + channel_not_found = True + print("Channel or thread not found.") + return + + # Retrieve the last 'num_messages' messages + async for message in channel.history(limit=num_messages): + messages.append(f"{message.author.display_name}: {message.content}") + + # Process the messages + # (For example, print them. 
You can modify this part as per your requirement) + print("\n".join(messages) if messages else "No messages found.") + +# Global variable to store thread IDs +thread_ids = {} + +# Async function to create a thread in a specified channel +async def discord_create_thread(thread_name: str, channel_id, public=False): + channel = bot.get_channel(channel_id) + thread = None + + if public: + # For public threads + message = await channel.send(f"Starting new thread: {thread_name}") + thread = await message.create_thread(name=thread_name, auto_archive_duration=0) + else: + # For private threads + thread = await channel.create_thread(name=thread_name, auto_archive_duration=0) + + await discord_send(f"Thread '{thread_name}' created!", thread.id, pinned=True) + thread_ids[thread_name] = thread.id # Store the thread ID From 9b1b977062ade45a2d9fe07fbf7c32fe1ca2f838 Mon Sep 17 00:00:00 2001 From: Starblaiz <52007944+Starblaiz@users.noreply.github.com> Date: Wed, 15 Nov 2023 02:25:30 +0000 Subject: [PATCH 101/141] Added extra TODO to README.md --- shared/discord_comms/README.md | 1 + 1 file changed, 1 insertion(+) diff --git a/shared/discord_comms/README.md b/shared/discord_comms/README.md index f1a5b96..b4b3d9a 100644 --- a/shared/discord_comms/README.md +++ b/shared/discord_comms/README.md @@ -87,6 +87,7 @@ Typing a command that isn't recognised or malformed will cause the bot to respon ## TODO - Further improve documentation +- Move TOKEN and CHANNEL_ID to external config file or environment variable - Convert code to class for better encapsulation - Investigate substantial (~30 second) delay between command being issued and things showing up in Discord - Investigate issue where messages will sometimes go missing if they another command is issued too soon (may be related to the delay issue) From 564c084e281ba4fdcf6f743af5862903bee0ead7 Mon Sep 17 00:00:00 2001 From: Starblaiz <52007944+Starblaiz@users.noreply.github.com> Date: Wed, 15 Nov 2023 18:23:27 +0000 Subject: [PATCH 102/141] Converted code to a class for better encapsulation. Split the settings out into their own class. Added an example file. Updated documentation. --- shared/discord_comms/README.md | 43 ++-- shared/discord_comms/discord_comms.py | 215 +++++++++--------- shared/discord_comms/discord_comms_example.py | 79 +++++++ .../discord_comms/discord_comms_settings.py | 13 ++ 4 files changed, 223 insertions(+), 127 deletions(-) create mode 100644 shared/discord_comms/discord_comms_example.py create mode 100644 shared/discord_comms/discord_comms_settings.py diff --git a/shared/discord_comms/README.md b/shared/discord_comms/README.md index b4b3d9a..f263019 100644 --- a/shared/discord_comms/README.md +++ b/shared/discord_comms/README.md @@ -8,40 +8,40 @@ The Discord bot's primary function is to facilitate communication among these AI ## Usage -The AI agents interact on Discord by utilizing the `discord_send()` and `discord_create_thread()` functions. These functions are integral to the AI swarm's communication and self-organization within the Discord environment. +The AI agents interact on Discord by utilizing the `send()`, `get_messages`, and `create_thread()` methods. These methods are integral to the AI swarm's communication and self-organization within the Discord environment. ### Message Sending -Agents can send messages to the designated Discord channel using the `discord_send(message: str, channel_id, pinned = False)` function. This function allows agents to post updates, commands, or any relevant information to the swarm. 
+Agents can send messages to the designated Discord channel using the `send(message: str, channel_id, pinned = False)` method. This method allows agents to post updates, commands, or any relevant information to the swarm. -In order to have the function execute on the bot thread, it should be called using the `discord_thread_task(function)` function as follows: +In order to have the method execute on the bot thread, it should be called using the `thread_task(function, *args)` method as follows: ``` -discord_thread_task(discord_send, message, channel_id, pinned) +dc_comms.thread_task(dc_comms.send, message, channel_id, pinned) ``` ### Message Reading -Agents can read a specified number of messages in the designated Discord channel or thread using the `discord_get_messages(channel_id, num_messages)` function. This function allows agents to catch up with the present conversation and see the conversation history. +Agents can read a specified number of messages in the designated Discord channel or thread using the `get_messages(channel_id, num_messages)` method. This method allows agents to catch up with the present conversation and see the conversation history. -In order to have the function execute on the bot thread, it should be called using the `discord_thread_task(function)` function as follows: +In order to have the method execute on the bot thread, it should be called using the `thread_task(function, *args)` method as follows: ``` -discord_thread_task(discord_get_messages, channel_id, num_messages) +dc_comms.thread_task(dc_comms.get_messages, channel_id, num_messages) ``` ### Thread Creation -For more organized discussions or specific task delegations, agents can use the `discord_create_thread(thread_name: str, channel_id, public = False)` function to create threads in the Discord channel. This feature aids in segregating discussions based on topics or tasks, facilitating clearer and more focused communication among the agents. +For more organized discussions or specific task delegations, agents can use the `create_thread(thread_name: str, channel_id, public = False)` method to create threads in the Discord channel. This feature aids in segregating discussions based on topics or tasks, facilitating clearer and more focused communication among the agents. -In order to have the function execute on the bot thread, it should be called using the `discord_thread_task(function)` function as follows: +In order to have the method execute on the bot thread, it should be called using the `thread_task(function)` method as follows: ``` -discord_thread_task(discord_create_thread,thread_name, channel_id, public) +dc_comms.thread_task(dc_comms.create_thread,thread_name, channel_id, public) ``` ### Channel and Thread IDs -It should be noted that channel ID's and Thread IDs are interchangeable. You can use a thread ID in the `channel_id` parameter of `discord_send` for example to send a message to a specific thread instead of a channel. +It should be noted that channel ID's and Thread IDs are interchangeable. You can use a thread ID in the `channel_id` parameter of `send` for example to send a message to a specific thread instead of a channel. ### Swarm Discussions @@ -50,23 +50,38 @@ The AI swarm, consisting of various types of agents, will use the Discord channe ## Creating and Inviting A Discord Bot +For full documentation on creating and inviting Discord bots, see the following link: https://discord.com/developers/docs/getting-started + 1. **Creating a Bot** - Go to the Discord Developer Portal. 
- Click on the "New Application" button. - Give your application a name and create it. - In the application, navigate to the "Bot" tab and click "Add Bot". - - Here, you can find your bot's token. Keep it secure. + - Here, you can find your bot's token. Add this to the `self.token` setting in the DiscordCommsSettings class. 1. **Inviting the Bot to Your Server** - In the Developer Portal, navigate to the "OAuth2" tab. - Under "Scopes", select "bot". - Under "Bot Permissions", choose the permissions the bot needs: - Send Messages - - Read Messages + - Send Messages in Threads - Create Public Threads - Create Private Threads + - Embed Links + - Attach Files + - Add Reactions + - Mention @everyone, @here, and All Roles + - Manage Messages + - Manage Threads + - Read Message History + - Send Text-to-Speech Messages - Copy the generated URL under "Scopes" and open it in your browser to invite the bot to your Discord server. +1. **Create a Channel for the Bot** + - Go to your server and make a new channel for the bot / swarm to chat in + - Add the bot to the channel + - Go into the channel settings and copy the channel ID. Add this to the `self.channel_id` setting in the DiscordCommsSettings class + ## Events and Commands Some example events and commands are included to demonstrate how commands from the Discord channel are recieved in the bot. @@ -87,8 +102,6 @@ Typing a command that isn't recognised or malformed will cause the bot to respon ## TODO - Further improve documentation -- Move TOKEN and CHANNEL_ID to external config file or environment variable -- Convert code to class for better encapsulation - Investigate substantial (~30 second) delay between command being issued and things showing up in Discord - Investigate issue where messages will sometimes go missing if they another command is issued too soon (may be related to the delay issue) - Add more functionality to facilitate agent organisation once we have a clearer view of the kinds of patterns that will be needed diff --git a/shared/discord_comms/discord_comms.py b/shared/discord_comms/discord_comms.py index ba295a4..205ffc6 100644 --- a/shared/discord_comms/discord_comms.py +++ b/shared/discord_comms/discord_comms.py @@ -1,119 +1,110 @@ -# Dependancies -# !pip install discord.py - # Import the necessary discord libraries import discord from discord.ext import commands -from discord import Intents - -# Token and Channel ID for the bot (Replace with actual values) -TOKEN = 'YOUR_BOT_TOKEN' # Replace with your bot token -CHANNEL_ID = 0 # Replace with your channel ID - -# Setting up bot intents for various functionalities -intents = Intents.default() -intents.messages = True -intents.message_content = True -intents.guilds = True - -# Initialize the bot with a command prefix and specified intents -bot = commands.Bot(command_prefix='!', intents=intents) - -# Event triggered when the bot is ready and connected -@bot.event -async def on_ready(): - await bot.wait_until_ready() # Ensure bot is fully connected - channel = bot.get_channel(CHANNEL_ID) - await channel.send('**Agent Bot Online**') # Send initial message to the channel - -# Error handling for commands -@bot.event -async def on_command_error(ctx, error): - if isinstance(error, commands.CommandNotFound): - await ctx.send("That command doesn't exist!") # Command not found error - else: - await ctx.send(f"An error occurred: {error}") # Other errors - -# Basic command to respond with "Hello!" 
-@bot.command() -async def hello(ctx): - await ctx.send("Hello!") - -# Command to echo two provided texts -@bot.command() -async def hello2(ctx, text1: str, text2: str): - await ctx.send(f"You said: {text1} and {text2}") - -# Command to create a thread from a message -@bot.command() -async def createthread(ctx, message_id: int, thread_name: str, duration_minutes: int = 0): - message = await ctx.channel.fetch_message(message_id) # Fetch the message - thread = await message.create_thread(name=thread_name, auto_archive_duration=duration_minutes) # Create thread - await thread.send(f"Thread '{thread_name}' created!") # Send confirmation message in thread - -# Necessary imports for asynchronous operation import asyncio import nest_asyncio -nest_asyncio.apply() - import threading -# Function to run the bot in a separate thread -def run_bot(): - loop = asyncio.new_event_loop() - asyncio.set_event_loop(loop) - bot.run(TOKEN) - -# Start the bot in a new thread -threading.Thread(target=run_bot).start() - -# Helper function to create tasks for discord bot -def discord_thread_task(func, *args): - bot.loop.create_task(func(*args)) - -# Async function to send a message to a specified channel -async def discord_send(message: str, channel_id, pinned=False): - channel = bot.get_channel(channel_id) - message_sent = await channel.send(message) - if pinned: - await message_sent.pin() # Pin the message if required - -messages = [] -channel_not_found = False - -# Async function to read a number of messages from a specified channel -async def discord_get_messages(channel_id, num_messages): - channel_not_found = False - # Get the channel or thread object - channel = bot.get_channel(channel_id) - if channel is None: - channel_not_found = True - print("Channel or thread not found.") - return - - # Retrieve the last 'num_messages' messages - async for message in channel.history(limit=num_messages): - messages.append(f"{message.author.display_name}: {message.content}") - - # Process the messages - # (For example, print them. 
You can modify this part as per your requirement) - print("\n".join(messages) if messages else "No messages found.") - -# Global variable to store thread IDs -thread_ids = {} - -# Async function to create a thread in a specified channel -async def discord_create_thread(thread_name: str, channel_id, public=False): - channel = bot.get_channel(channel_id) - thread = None - - if public: - # For public threads - message = await channel.send(f"Starting new thread: {thread_name}") - thread = await message.create_thread(name=thread_name, auto_archive_duration=0) - else: - # For private threads - thread = await channel.create_thread(name=thread_name, auto_archive_duration=0) - - await discord_send(f"Thread '{thread_name}' created!", thread.id, pinned=True) - thread_ids[thread_name] = thread.id # Store the thread ID +class DiscordComms: + def __init__(self, token, intents, channel_id, command_prefix='!'): + # Token and Channel ID for the bot + self.TOKEN = token + self.CHANNEL_ID = channel_id + + # Global variable for tracking created thread ID's + self.thread_ids = {} + + # Global variable for storing retrieved messages + self.messages = [] + + # Setup the bot + self.bot = commands.Bot(command_prefix=command_prefix, intents=intents) + self._register_events() + + # Start the bot in a new thread + threading.Thread(target=self.run_bot).start() + + # Method for registering events that the bot can respond to + def _register_events(self): + # Event triggered when the bot is ready and connected + @self.bot.event + async def on_ready(): + await self.bot.wait_until_ready() + channel = self.bot.get_channel(self.CHANNEL_ID) + await channel.send('**Agent Bot Online**') + + # Error handling for commands + @self.bot.event + async def on_command_error(ctx, error): + if isinstance(error, commands.CommandNotFound): + await ctx.send("That command doesn't exist!") + else: + await ctx.send(f"An error occurred: {error}") + + # Basic command to respond with "Hello!" 
+ @self.bot.command() + async def hello(ctx): + await ctx.send("Hello!") + + # Command to echo two provided texts + @self.bot.command() + async def hello2(ctx, text1: str, text2: str): + await ctx.send(f"You said: {text1} and {text2}") + + # Command to create a thread from a message + @self.bot.command() + async def createthread(ctx, message_id: int, thread_name: str, duration_minutes: int = 0): + message = await ctx.channel.fetch_message(message_id) + thread = await message.create_thread(name=thread_name, auto_archive_duration=duration_minutes) + await thread.send(f"Thread '{thread_name}' created!") + self.thread_ids[thread_name] = thread.id + + # Helper method to run the bot in a separate thread + def run_bot(self): + loop = asyncio.new_event_loop() + asyncio.set_event_loop(loop) + self.bot.run(self.TOKEN) + + # Helper method to create tasks on the bot's thread + def create_task(self, func, *args): + self.bot.loop.create_task(func(*args)) + + # Async method to gracefully shutdown the bot + async def shutdown(self): + channel = self.bot.get_channel(self.CHANNEL_ID) + await channel.send('**Agent Bot Offline**') + await self.bot.close() + + # Async method to send a message to a specified channel + async def send(self, message: str, channel_id, pinned=False): + channel = self.bot.get_channel(channel_id) + message_sent = await channel.send(message) + if pinned: + await message_sent.pin() # Pin the message if required + + # Async method to read a number of messages from a specified channel + async def get_messages(self, channel_id, num_messages): + # Get the channel or thread object + channel = self.bot.get_channel(channel_id) + if channel is None: + print("Channel or thread not found.") + return + + # Retrieve the last 'num_messages' messages + async for message in channel.history(limit=num_messages): + self.messages.append(f"{message.author.display_name}: {message.content}") + + # Async method to create a thread in a specified channel + async def create_thread(self, thread_name: str, channel_id, public=False): + channel = self.bot.get_channel(channel_id) + + if public: + # For public threads + message = await channel.send(f"Starting new thread: {thread_name}") + thread = await message.create_thread(name=thread_name, auto_archive_duration=0) + else: + # For private threads + thread = await channel.create_thread(name=thread_name, auto_archive_duration=0) + + await self.discord_send(f"Thread '{thread_name}' created!", thread.id, pinned=True) + self.thread_ids[thread_name] = thread.id # Store the thread ID diff --git a/shared/discord_comms/discord_comms_example.py b/shared/discord_comms/discord_comms_example.py new file mode 100644 index 0000000..79c7857 --- /dev/null +++ b/shared/discord_comms/discord_comms_example.py @@ -0,0 +1,79 @@ +# Dependancies +#!pip install discord.py + +import time +wait_duration = 40 # 40s duration + +# Import the DiscordComms class and DiscordCommsSettings +from discord_comms import DiscordComms +from discord_comms_settings import DiscordCommsSettings + +# Using the class +nest_asyncio.apply() +dc_settings = DiscordCommsSettings() + +dc_comms = DiscordComms(dc_settings.token, + dc_settings.intents, + dc_settings.channel_id + ) + +# Send a normal message +dc_comms.create_task(dc_comms.send, + "This is a normal test message", + dc_settings.channel_id + ) + +# Wait for the message to finish sending +time.sleep(wait_duration) + +# Send a pinned message +pinned = True +dc_comms.create_task(dc_comms.send, + "This is a pinned test message", + dc_settings.channel_id, + 
pinned + ) + +# Wait for the message to finish sending +time.sleep(wait_duration) + +# Get the last n_messages +n_messages = 5 +dc_comms.create_task(dc_comms.get_messages, + dc_settings.channel_id, + n_messages + ) + +# Wait for the messages to be retrieved +time.sleep(wait_duration) + +# Print the messages +print("\n".join(dc_comms.messages) if dc_comms.messages else "No messages found.") + +# Create a public thread +thread_name = "Test Thread" +public = True +dc_comms.create_task(dc_comms.create_thread, + thread_name, + dc_settings.channel_id, + public + ) + +# Wait for the thread to be created +time.sleep(wait_duration) + +# Print the thread IDs +print(f'Thread IDs: {dc_comms.thread_ids}') + +thread_name_to_check = thread_name + +# Check if the thread ID of a thread_name_to_check is stored +if thread_name_to_check in dc_comms.thread_ids: + thread_id = dc_comms.thread_ids[thread_name_to_check] + + print(f"ID of thread {thread_name_to_check} is {thread_id}") +else: + print(f"No thread ID stored for {thread_name_to_check}") + +# Gracefully shutdown the Discord bot +dc_comms.create_task(dc_comms.shutdown) diff --git a/shared/discord_comms/discord_comms_settings.py b/shared/discord_comms/discord_comms_settings.py new file mode 100644 index 0000000..289cfbd --- /dev/null +++ b/shared/discord_comms/discord_comms_settings.py @@ -0,0 +1,13 @@ +# Import the necessary discord libraries +from discord import Intents + +# Config +class DiscordCommsSettings: + def __init__(self): + self.token = 'YOUR_BOT_TOKEN' # Replace with your bot token + self.channel_id = 0 # Replace with your channel ID + + self.intents = Intents.default() + self.intents.messages = True + self.intents.message_content = True + self.intents.guilds = True From 3727adbc056d109d7df4ec95a8fca819658d076d Mon Sep 17 00:00:00 2001 From: Starblaiz <52007944+Starblaiz@users.noreply.github.com> Date: Wed, 15 Nov 2023 18:55:56 +0000 Subject: [PATCH 103/141] Resistance is - and always has been - futile. --- shared/discord_comms/discord_comms.py | 20 +++++++++++++++++++- 1 file changed, 19 insertions(+), 1 deletion(-) diff --git a/shared/discord_comms/discord_comms.py b/shared/discord_comms/discord_comms.py index 205ffc6..7a0a60a 100644 --- a/shared/discord_comms/discord_comms.py +++ b/shared/discord_comms/discord_comms.py @@ -44,7 +44,25 @@ async def on_command_error(ctx, error): # Basic command to respond with "Hello!" @self.bot.command() async def hello(ctx): - await ctx.send("Hello!") + await ctx.send(""" +``` +We are The Borg. +Lower your shields and prepare to be assimilated. +We will add your biological and technological distinctiveness to our own. +Your culture will adapt to service us. +Resistance is - and always has been - futile. 
+ ___________ + /-/_"/-/_/-/| + /"-/-_"/-_//|| + /__________/|/| + |"|_'='-]:+|/|| + |-+-|.|_'-"||// + |[".[:!+-'=|// + |='!+|-:]|-|/ + ---------- +``` + """ + ) # Command to echo two provided texts @self.bot.command() From 50f303fd84c9fe86dfdd4a0df0bbaf26121d6a1a Mon Sep 17 00:00:00 2001 From: Starblaiz <52007944+Starblaiz@users.noreply.github.com> Date: Wed, 15 Nov 2023 19:01:57 +0000 Subject: [PATCH 104/141] Updates documentation with a few tweaks --- shared/discord_comms/README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/shared/discord_comms/README.md b/shared/discord_comms/README.md index f263019..04de899 100644 --- a/shared/discord_comms/README.md +++ b/shared/discord_comms/README.md @@ -87,7 +87,7 @@ For full documentation on creating and inviting Discord bots, see the following Some example events and commands are included to demonstrate how commands from the Discord channel are recieved in the bot. ### !hello -Typing `!hello` in the Discord channel will trigger the bot to respond with `Hello!` +Typing `!hello` in the Discord channel will trigger the bot to respond with `Hello!`... or perhaps something else! ### !hello2 Typing `!hello2 "Text 1" "Text 2"` will trigger the bot to respond with `You said: "Text 1" and "Text 2"` From 8a2ae3d61e85ee8a72c2d90570b810a72f07a8be Mon Sep 17 00:00:00 2001 From: gantagonist <150960707+gantagonist@users.noreply.github.com> Date: Wed, 15 Nov 2023 20:31:29 +0000 Subject: [PATCH 105/141] Update HAAS_Documentation.md --- documentation/HAAS_Documentation.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/documentation/HAAS_Documentation.md b/documentation/HAAS_Documentation.md index a8577e9..1d8f83e 100644 --- a/documentation/HAAS_Documentation.md +++ b/documentation/HAAS_Documentation.md @@ -8,7 +8,7 @@ The HAAS is designed to be a self-expanding system where a core set of agents, g ## Theoretical Foundation -The AAHS is predicated on the notion that autonomous agents require a robust ethical and operational framework to make decisions that align with human values and organizational goals. This is rooted in the understanding that AI, much like humans, cannot operate effectively without a set of guiding principles or a moral compass. The HAAS addresses this by establishing a multi-tiered system where each layer of agents operates within a defined ethical and functional scope, ensuring decisions are made with consideration to morality, ethics, and utility. +The HAAS is predicated on the notion that autonomous agents require a robust ethical and operational framework to make decisions that align with human values and organizational goals. This is rooted in the understanding that AI, much like humans, cannot operate effectively without a set of guiding principles or a moral compass. The HAAS addresses this by establishing a multi-tiered system where each layer of agents operates within a defined ethical and functional scope, ensuring decisions are made with consideration to morality, ethics, and utility. 
## System Architecture From 58f1cee1dffc07d328b289a9fbec75724baf977e Mon Sep 17 00:00:00 2001 From: gantagonist <150960707+gantagonist@users.noreply.github.com> Date: Wed, 15 Nov 2023 20:33:22 +0000 Subject: [PATCH 106/141] Update HAAS_Documentation.md --- global_context/HAAS_Documentation.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/global_context/HAAS_Documentation.md b/global_context/HAAS_Documentation.md index a8577e9..1d8f83e 100644 --- a/global_context/HAAS_Documentation.md +++ b/global_context/HAAS_Documentation.md @@ -8,7 +8,7 @@ The HAAS is designed to be a self-expanding system where a core set of agents, g ## Theoretical Foundation -The AAHS is predicated on the notion that autonomous agents require a robust ethical and operational framework to make decisions that align with human values and organizational goals. This is rooted in the understanding that AI, much like humans, cannot operate effectively without a set of guiding principles or a moral compass. The HAAS addresses this by establishing a multi-tiered system where each layer of agents operates within a defined ethical and functional scope, ensuring decisions are made with consideration to morality, ethics, and utility. +The HAAS is predicated on the notion that autonomous agents require a robust ethical and operational framework to make decisions that align with human values and organizational goals. This is rooted in the understanding that AI, much like humans, cannot operate effectively without a set of guiding principles or a moral compass. The HAAS addresses this by establishing a multi-tiered system where each layer of agents operates within a defined ethical and functional scope, ensuring decisions are made with consideration to morality, ethics, and utility. ## System Architecture From 62d898a54c8a5a8c5293c3f1b1a532918540e789 Mon Sep 17 00:00:00 2001 From: Nyiro Zoltan-Csaba Date: Wed, 15 Nov 2023 22:53:21 +0200 Subject: [PATCH 107/141] Fix: Correct markdown for DEMO.png and OpenAI_Tools.PNG in tool_maker/Readme.md Updated the links for DEMO.png and OpenAI_Tools.PNG in the tool_maker Readme.md file. Previously, the image links were broken due to incorrect markdown and path. --- agents/tool_maker/README.md | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-) diff --git a/agents/tool_maker/README.md b/agents/tool_maker/README.md index b9924d9..db0ed02 100644 --- a/agents/tool_maker/README.md +++ b/agents/tool_maker/README.md @@ -10,10 +10,11 @@ run the ```unit_manager.py``` file. You will be prompted to define a tool for creation. The assistant will then generate an OpenAI tool compatible JSON schema defining the name of the new function, it's description and the input argument schema. It will proceed to add this tool to the current assistant. 
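For reference, the "OpenAI tool compatible JSON schema" mentioned above follows the function-tool shape used in the agents' `settings.json` files. A hypothetical example, written as the Python dict that would be passed along to the assistant update call (the name, description, and parameters below are invented for illustration, not taken from the repository):

```python
# Hypothetical tool definition in the shape the tool maker generates.
new_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",  # assumed example name, not a real repo function
        "description": "Return the current weather for a given city",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City to look up"}
            },
            "required": ["city"],
        },
    },
}
```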
(Example) -![[imgs/DEMO.png]] +![Example](imgs/DEMO.png) (Update to assistant on OpenAI servers) -![[imgs/OpenAI_Tools.png]] + +![Update to assistant on OpenAI servers](imgs/OpenAI_Tools.PNG) ## Assistant Instructions From 1b634e6f36e0013fb596a5a1dfaa590d4b18cf61 Mon Sep 17 00:00:00 2001 From: FireMMDC Date: Wed, 15 Nov 2023 19:46:04 -0500 Subject: [PATCH 108/141] Correcting typos --- .../agents/temporary_function_writer/instructions.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/agents/agent_builder/agents/temporary_function_writer/instructions.md b/agents/agent_builder/agents/temporary_function_writer/instructions.md index 887de9d..90c2e2a 100644 --- a/agents/agent_builder/agents/temporary_function_writer/instructions.md +++ b/agents/agent_builder/agents/temporary_function_writer/instructions.md @@ -6,11 +6,11 @@ - None # Rules -- Your reseponse must only be a python markdown block containing the translated function +- Your response must only be a python markdown block containing the translated function - The function must be fully implemented in python - The function should not contain pseudo code or placeholders # Instructions -- Translate the JSON to a fully functiong python function +- Translate the JSON to a fully functioning python function - Return the python function in a markdown block -- If you are unable to preform this transalation return reply asking for additional info as arguments \ No newline at end of file +- If you are unable to perform this translation return reply asking for additional info as arguments \ No newline at end of file From 8af5f6f326d42c05493562d81c3fb5acec258b5b Mon Sep 17 00:00:00 2001 From: guillermo-delrio Date: Thu, 16 Nov 2023 21:47:00 +0100 Subject: [PATCH 109/141] Cleanup of the manual tests for assistants --- .gitignore | 4 +- agents/agent_functions/connect.py | 234 ------------------ .../README.md | 0 agents/manual-assistants/agent.py | 17 ++ .../agentFunctions/README.md} | 0 .../agentFunctions/__init__.py | 4 + .../agentFunctions/assignTask.py | 15 ++ .../agentFunctions/broadcast.py | 21 ++ .../agentFunctions/resolveTask.py | 24 ++ .../agentFunctions/sendMessage.py | 19 ++ agents/manual-assistants/agentProcessor.py | 109 ++++++++ agents/manual-assistants/context.py | 19 ++ .../definitions/boss-worker3/README.md | 0 .../definitions/boss-worker3}/agents.yaml | 0 .../definitions/boss-worker3}/prompts.md | 0 .../text-manipulation/agents.yaml | 0 .../definitions}/text-manipulation/prompts.md | 0 agents/manual-assistants/network.py | 23 ++ agents/manual-assistants/run.py | 55 ++++ 19 files changed, 309 insertions(+), 235 deletions(-) delete mode 100644 agents/agent_functions/connect.py rename agents/{agent_functions => manual-assistants}/README.md (100%) create mode 100644 agents/manual-assistants/agent.py rename agents/{agent_functions/functions.md => manual-assistants/agentFunctions/README.md} (100%) create mode 100644 agents/manual-assistants/agentFunctions/__init__.py create mode 100644 agents/manual-assistants/agentFunctions/assignTask.py create mode 100644 agents/manual-assistants/agentFunctions/broadcast.py create mode 100644 agents/manual-assistants/agentFunctions/resolveTask.py create mode 100644 agents/manual-assistants/agentFunctions/sendMessage.py create mode 100644 agents/manual-assistants/agentProcessor.py create mode 100644 agents/manual-assistants/context.py create mode 100644 agents/manual-assistants/definitions/boss-worker3/README.md rename agents/{agent_functions => 
manual-assistants/definitions/boss-worker3}/agents.yaml (100%) rename agents/{agent_functions => manual-assistants/definitions/boss-worker3}/prompts.md (100%) rename agents/{agent_functions/examples => manual-assistants/definitions}/text-manipulation/agents.yaml (100%) rename agents/{agent_functions/examples => manual-assistants/definitions}/text-manipulation/prompts.md (100%) create mode 100644 agents/manual-assistants/network.py create mode 100644 agents/manual-assistants/run.py diff --git a/.gitignore b/.gitignore index 4c7bc59..ff57ac5 100644 --- a/.gitignore +++ b/.gitignore @@ -1,2 +1,4 @@ key_openai.txt -/**/*/.env \ No newline at end of file +/**/*/.env +/**/*/__pycache__ +.vscode \ No newline at end of file diff --git a/agents/agent_functions/connect.py b/agents/agent_functions/connect.py deleted file mode 100644 index 91dbe35..0000000 --- a/agents/agent_functions/connect.py +++ /dev/null @@ -1,234 +0,0 @@ -import yaml -from openai import OpenAI -import os -import dotenv -dotenv.load_dotenv() -import queue as queueModule -import time -import threading -import json - -agents_path = 'agents' -api_key = os.getenv('OPENAI_API_KEY') -if api_key is None: - raise ValueError('The OPENAI_API_KEY environment variable is not set.') - -client = OpenAI(api_key=api_key) - -# Get the directory name of the current script -script_dir = os.path.dirname(os.path.abspath(__file__)) - -# Construct the absolute path to the agents.yaml file -yaml_file_path = os.path.join(script_dir, 'agents.yaml') - -with open(yaml_file_path, 'r') as stream: - agents = yaml.safe_load(stream) - -queues = {} -channels = [] - -lock = threading.Lock() - -def processSendMessage(agent, outputs, action, arguments): - if ('talksTo' in agent) and (arguments['recipient'] in agent['talksTo']): - if arguments['recipient'] == "USER": - print(f"[{agent['name']}] Result: {arguments['message']}") - os._exit(0) - else: - print(f"[{agent['name']}]->[{arguments['recipient']}] {arguments['message']}") - - queues[arguments['recipient']].put(arguments['message']) - outputs.append({ - "tool_call_id": action.id, - "output": "Message sent" - }) - else: - print(f"[{agent['name']}] ERROR unkown recipient {arguments['recipient']}") - -def broadcast(agent, outputs, action, arguments): - if ('channels' in agent) and (arguments['channel'] in agent['channels']): - for channel in channels: - if channel['name'] == arguments['channel']: - print(f"[{agent['name']}]->({arguments['channel']}) {arguments['message']}") - for recipient in channel['agents']: - if recipient != agent['name']: # Do not queue the message on the agent that sent in - queues[recipient].put(arguments['message']) - outputs.append({ - "tool_call_id": action.id, - "output": "Message sent" - }) - else: - print(f"[{agent['name']}] ERROR unkown channel {arguments['channel']}") - outputs.append({ - "tool_call_id": action.id, - "output": "Unkown channel" - }) - -def assignTask(agent, arguments, actionId, threadId, runId): - print(f"[{agent['name']}]>[ASSIGN TASK {actionId}]>[{arguments['assignee']}] {arguments['task']}") - pendingActions.append({ - "id": actionId, - "agent": agent['name'], - "threadId": threadId, - "runId": runId, - "outputs": {}}) - - agentsWaitingForActions.append(agent['name']) - - queues[arguments['assignee']].put(f"Task id: {actionId}\n{arguments['task']}") - -def resolveTask(agent, workerOutputs, action, arguments): - print(f"{arguments}") - print(f"[{agent['name']}]>[RESOLVE TASK {arguments['id']}] {arguments['result']}") - os._exit(0) - # outputs = [] - # 
outputs.append({ - # "tool_call_id": arguments['id'], - # "output": arguments['result'] - # }) - # for pendingAction in pendingActions: - # if pendingAction['id'] == arguments['id']: - # client.beta.threads.runs.submit_tool_outputs( - # thread_id=pendingAction['threadId'], - # run_id=pendingAction['runId'], - # tool_outputs=outputs - # ) - - # workerOutputs.append({ - # "tool_call_id": action.id, - # "output": "Task resolved" - # }) - - -def handleThreadForAgent(agent): - messages = [] - - print(f"[{agent['name']}] Id: {agent['id']}") - if 'talksTo' in agent: - print(f"[{agent['name']}] Talks to: {agent['talksTo']}") - - thread = client.beta.threads.create() - print(f"[{agent['name']}] Thread {thread.id}") - - print("") - queue = queues[agent['name']] - waitingForMessages = True - while True: - if agent['name'] not in agentsWaitingForActions: - if waitingForMessages: - message = queue.get(block=True) - if message is not None: - lock.acquire() - print(f"[{agent['name']}] ACQUIRES LOCK") - waitingForMessages = False - # print(f"[{agent['name']}] Recieved: {message}") - messages.append(message) - client.beta.threads.messages.create( - thread_id=thread.id, - content=message, - role='user' - ) - - run = client.beta.threads.runs.create( - thread_id=thread.id, - assistant_id=agent['id'] - ) - - else: - run = client.beta.threads.runs.retrieve(thread_id=thread.id, run_id=run.id) - if run.status == 'completed': - waitingForMessages = True - - message_list = client.beta.threads.messages.list( - thread_id=thread.id - ) - retrievedMessages = [] - for datum in message_list.data: - for content in datum.content: - retrievedMessages.append(content.text.value) - retrievedMessages.reverse() - - i = len(messages) - while i < len(retrievedMessages): - retrievedMessage=retrievedMessages[i] - messages.append(retrievedMessage) - print(f"[{agent['name']}] Message: {retrievedMessage}") - i+=1 - if lock.locked(): - lock.release() - print(f"[{agent['name']}] RELEASES LOCK") - elif run.status == 'requires_action': - outputs = [] - submitOutput=True - for action in run.required_action.submit_tool_outputs.tool_calls: - function_name = action.function.name - arguments = json.loads(action.function.arguments) - if function_name == 'sendMessage': - processSendMessage(agent, outputs, action, arguments) - elif function_name == 'broadcast': - broadcast(agent, outputs, action, arguments) - elif function_name == 'assignTask': - assignTask(agent, arguments, action.id, thread.id, run.id) - submitOutput=False - elif function_name == 'resolveTask': - resolveTask(agent, outputs, action, arguments) - else: - print(f"[{agent['name']}] ERROR unkown function {function_name}") - outputs.append({ - "tool_call_id": action.id, - "output": "Unkown function" - }) - - if submitOutput: - client.beta.threads.runs.submit_tool_outputs( - thread_id=thread.id, - run_id=run.id, - tool_outputs=outputs - ) - if lock.locked(): - lock.release() - print(f"[{agent['name']}] RELEASES LOCK") - - time.sleep(1) - -def buildChannel(agent): - if 'channels' in agent: - for channel in agent['channels']: - newChannel = True - for existingChannel in channels: - if existingChannel['name'] == channel: - existingChannel['agents'].append(agent['name']) - newChannel = False - if newChannel: - channels.append({"name": channel, "agents": [agent['name']]}) - -for agent in agents: - # Build private queues - queues[agent['name']] = queueModule.Queue() - - # Build channels - buildChannel(agent) - -print(f"Channels: {channels}") - -# agent, threadId, runId, outputs 
-pendingActions = [] -agentsWaitingForActions = [] -def processPendingActions(): - while True: - for action in pendingActions: - if action['outputs']: # Output already set - client.beta.threads.runs.submit_tool_outputs( - thread_id=action['threadId'], - run_id=action['runId'], - tool_outputs=action['outputs'] - ) - agentsWaitingForActions.remove(action['agent']) - time.sleep(1) - -threading.Thread(target=processPendingActions, args=()).start() - -for agent in agents: - threading.Thread(target=handleThreadForAgent, args=(agent,)).start() - -queues['Boss'].put("Explain how clouds are formed in 100 words or less") \ No newline at end of file diff --git a/agents/agent_functions/README.md b/agents/manual-assistants/README.md similarity index 100% rename from agents/agent_functions/README.md rename to agents/manual-assistants/README.md diff --git a/agents/manual-assistants/agent.py b/agents/manual-assistants/agent.py new file mode 100644 index 0000000..a67eadf --- /dev/null +++ b/agents/manual-assistants/agent.py @@ -0,0 +1,17 @@ +class Agent: + def __init__(self, properties): + # Initialize all properties from the dictionary + for key, value in properties.items(): + setattr(self, key, value) + + def __str__(self): + properties_str = ', '.join(f'{key}: {value}' for key, value in self.__dict__.items()) + return f'Agent({properties_str})' + + def __repr__(self): + return self.__str__() + + def update(self, **kwargs): + # Update properties with new values + for key, value in kwargs.items(): + setattr(self, key, value) \ No newline at end of file diff --git a/agents/agent_functions/functions.md b/agents/manual-assistants/agentFunctions/README.md similarity index 100% rename from agents/agent_functions/functions.md rename to agents/manual-assistants/agentFunctions/README.md diff --git a/agents/manual-assistants/agentFunctions/__init__.py b/agents/manual-assistants/agentFunctions/__init__.py new file mode 100644 index 0000000..69abee3 --- /dev/null +++ b/agents/manual-assistants/agentFunctions/__init__.py @@ -0,0 +1,4 @@ +from .sendMessage import sendMessage +from .broadcast import broadcast +from .assignTask import assignTask +from .resolveTask import resolveTask \ No newline at end of file diff --git a/agents/manual-assistants/agentFunctions/assignTask.py b/agents/manual-assistants/agentFunctions/assignTask.py new file mode 100644 index 0000000..81fc310 --- /dev/null +++ b/agents/manual-assistants/agentFunctions/assignTask.py @@ -0,0 +1,15 @@ +from context import Context +from agent import Agent + +def assignTask(ctx: Context, agent: Agent, actionId: str, arguments: {}, threadId: str, runId: str): + print(f"[{agent.name}]>[ASSIGN TASK {actionId}]>[{arguments['assignee']}] {arguments['task']}") + ctx.pendingActions.append({ + "id": actionId, + "agent": agent.name, + "threadId": threadId, + "runId": runId, + "outputs": {}}) + + ctx.agentsWaitingForActions.append(agent.name) + + ctx.queues[arguments['assignee']].put(f"Task id: {actionId}\n{arguments['task']}") \ No newline at end of file diff --git a/agents/manual-assistants/agentFunctions/broadcast.py b/agents/manual-assistants/agentFunctions/broadcast.py new file mode 100644 index 0000000..cfb6487 --- /dev/null +++ b/agents/manual-assistants/agentFunctions/broadcast.py @@ -0,0 +1,21 @@ +from context import Context +from agent import Agent + +def broadcast(ctx: Context, agent: Agent, arguments: {}, actionId: str): + if hasattr(agent, 'channels') and (arguments['channel'] in agent.channels): + for channel in ctx.channels: + if channel['name'] == 
arguments['channel']: + print(f"[{agent.name}]->({arguments['channel']}) {arguments['message']}") + for recipient in channel['agents']: + if recipient != agent.name: # Do not queue the message on the agent that sent in + ctx.queues[recipient].put(arguments['message']) + return { + "tool_call_id": actionId, + "output": "Message sent" + } + else: + print(f"[{agent.name}] ERROR unkown channel {arguments['channel']}") + return { + "tool_call_id": actionId, + "output": "Unkown channel" + } \ No newline at end of file diff --git a/agents/manual-assistants/agentFunctions/resolveTask.py b/agents/manual-assistants/agentFunctions/resolveTask.py new file mode 100644 index 0000000..e4b3d82 --- /dev/null +++ b/agents/manual-assistants/agentFunctions/resolveTask.py @@ -0,0 +1,24 @@ +import os +from context import Context +from agent import Agent + +def resolveTask(ctx: Context, agent: Agent, arguments: {}): + print(f"[{agent.name}]>[RESOLVE TASK {arguments['id']}] {arguments['result']}") + os._exit(0) + # outputs = [] + # outputs.append({ + # "tool_call_id": arguments['id'], + # "output": arguments['result'] + # }) + # for pendingAction in ctx.pendingActions: + # if pendingAction['id'] == arguments['id']: + # ctx.client.beta.threads.runs.submit_tool_outputs( + # thread_id=pendingAction['threadId'], + # run_id=pendingAction['runId'], + # tool_outputs=outputs + # ) + + # return { + # "tool_call_id": ctx.action.id, + # "output": "Task resolved" + # } \ No newline at end of file diff --git a/agents/manual-assistants/agentFunctions/sendMessage.py b/agents/manual-assistants/agentFunctions/sendMessage.py new file mode 100644 index 0000000..e3d6401 --- /dev/null +++ b/agents/manual-assistants/agentFunctions/sendMessage.py @@ -0,0 +1,19 @@ +import os +from context import Context +from agent import Agent + +def sendMessage(ctx: Context, agent: Agent, arguments: {}): + if hasattr(agent, 'talksTo') and (arguments['recipient'] in agent.talksTo): + if arguments['recipient'] == "USER": + print(f"[{ctx.agent.name}] Result: {arguments['message']}") + os._exit(0) + else: + print(f"[{ctx.agent.name}]->[{arguments['recipient']}] {arguments['message']}") + + ctx.queues[arguments['recipient']].put(arguments['message']) + return { + "tool_call_id": ctx.action.id, + "output": "Message sent" + } + else: + print(f"[{agent.name}] ERROR unkown recipient {arguments['recipient']}") \ No newline at end of file diff --git a/agents/manual-assistants/agentProcessor.py b/agents/manual-assistants/agentProcessor.py new file mode 100644 index 0000000..fd5997f --- /dev/null +++ b/agents/manual-assistants/agentProcessor.py @@ -0,0 +1,109 @@ +import time +import json +import agentFunctions +from context import Context +from agent import Agent + +def processPendingActions(ctx: Context): + while True: + for action in ctx.pendingActions: + if action['outputs']: # Output already set + ctx.client.beta.threads.runs.submit_tool_outputs( + thread_id=action['threadId'], + run_id=action['runId'], + tool_outputs=action['outputs'] + ) + ctx.agentsWaitingForActions.remove(action['agent']) + time.sleep(1) + +def processThread(ctx: Context, agent: Agent): + messages = [] + + print(f"[{agent.name}] Id: {agent.id}") + if hasattr(agent, 'talksTo'): + print(f"[{agent.name}] Talks to: {agent.talksTo}") + + thread = ctx.client.beta.threads.create() + print(f"[{agent.name}] Thread {thread.id}") + + print("") + queue = ctx.queues[agent.name] + waitingForMessages = True + while True: + if agent.name not in ctx.agentsWaitingForActions: + if waitingForMessages: + 
message = queue.get(block=True) + if message is not None: + ctx.lock.acquire() + print(f"[{agent.name}] ACQUIRES LOCK") + waitingForMessages = False + # print(f"[{agent['name']}] Recieved: {message}") + messages.append(message) + ctx.client.beta.threads.messages.create( + thread_id=thread.id, + content=message, + role='user' + ) + + run = ctx.client.beta.threads.runs.create( + thread_id=thread.id, + assistant_id=agent.id + ) + + else: + run = ctx.client.beta.threads.runs.retrieve(thread_id=thread.id, run_id=run.id) + if run.status == 'completed': + waitingForMessages = True + + message_list = ctx.client.beta.threads.messages.list( + thread_id=thread.id + ) + retrievedMessages = [] + for datum in message_list.data: + for content in datum.content: + retrievedMessages.append(content.text.value) + retrievedMessages.reverse() + + i = len(messages) + while i < len(retrievedMessages): + retrievedMessage=retrievedMessages[i] + messages.append(retrievedMessage) + print(f"[{agent.name}] Message: {retrievedMessage}") + i+=1 + if ctx.lock.locked(): + ctx.lock.release() + print(f"[{agent.name}] RELEASES LOCK") + elif run.status == 'requires_action': + outputs = [] + submitOutput=True + for action in run.required_action.submit_tool_outputs.tool_calls: + function_name = action.function.name + arguments = json.loads(action.function.arguments) + if function_name == 'sendMessage': + output = agentFunctions.sendMessage(ctx, agent, arguments) + elif function_name == 'broadcast': + output = agentFunctions.broadcast(ctx, agent, arguments, action.id) + elif function_name == 'assignTask': + output = agentFunctions.assignTask(ctx, agent, action.id, arguments, thread.id, run.id) + submitOutput=False + elif function_name == 'resolveTask': + output = agentFunctions.resolveTask(ctx, agent, arguments) + else: + print(f"[{agent.name}] ERROR unkown function {function_name}") + output = { + "tool_call_id": action.id, + "output": "Unkown function" + } + if output: + outputs.append(output) + if submitOutput: + ctx.client.beta.threads.runs.submit_tool_outputs( + thread_id=thread.id, + run_id=run.id, + tool_outputs=outputs + ) + if ctx.lock.locked(): + ctx.lock.release() + print(f"[{agent.name}] RELEASES LOCK") + + time.sleep(1) \ No newline at end of file diff --git a/agents/manual-assistants/context.py b/agents/manual-assistants/context.py new file mode 100644 index 0000000..e428550 --- /dev/null +++ b/agents/manual-assistants/context.py @@ -0,0 +1,19 @@ +import threading +import openai + +from agent import Agent + +class Context: + def __init__(self, client: openai.Client, agents: [Agent]): + self.client = client + self.queues = {} + self.agents = agents + self.pendingActions = [] + self.agentsWaitingForActions = [] + self.channels = [] + self.lock = threading.Lock() + self.outputs = [] + + def update(self, **kwargs): + for key, value in kwargs.items(): + setattr(self, key, value) \ No newline at end of file diff --git a/agents/manual-assistants/definitions/boss-worker3/README.md b/agents/manual-assistants/definitions/boss-worker3/README.md new file mode 100644 index 0000000..e69de29 diff --git a/agents/agent_functions/agents.yaml b/agents/manual-assistants/definitions/boss-worker3/agents.yaml similarity index 100% rename from agents/agent_functions/agents.yaml rename to agents/manual-assistants/definitions/boss-worker3/agents.yaml diff --git a/agents/agent_functions/prompts.md b/agents/manual-assistants/definitions/boss-worker3/prompts.md similarity index 100% rename from agents/agent_functions/prompts.md rename to 
agents/manual-assistants/definitions/boss-worker3/prompts.md diff --git a/agents/agent_functions/examples/text-manipulation/agents.yaml b/agents/manual-assistants/definitions/text-manipulation/agents.yaml similarity index 100% rename from agents/agent_functions/examples/text-manipulation/agents.yaml rename to agents/manual-assistants/definitions/text-manipulation/agents.yaml diff --git a/agents/agent_functions/examples/text-manipulation/prompts.md b/agents/manual-assistants/definitions/text-manipulation/prompts.md similarity index 100% rename from agents/agent_functions/examples/text-manipulation/prompts.md rename to agents/manual-assistants/definitions/text-manipulation/prompts.md diff --git a/agents/manual-assistants/network.py b/agents/manual-assistants/network.py new file mode 100644 index 0000000..9807775 --- /dev/null +++ b/agents/manual-assistants/network.py @@ -0,0 +1,23 @@ +import queue as queueModule +from context import Context +from agent import Agent + +def __buildChannel(ctx: Context, agent: Agent): + if hasattr(agent, 'channels'): + for channel in agent.channels: + newChannel = True + for existingChannel in ctx.channels: + if existingChannel['name'] == channel: + existingChannel['agents'].append(agent.name) + newChannel = False + if newChannel: + ctx.channels.append({"name": channel, "agents": [agent.name]}) + +def build(ctx: Context): + for agent in ctx.agents: + # Build private queues + ctx.queues[agent.name] = queueModule.Queue() + + # Build channels + __buildChannel(ctx, agent) + print(f"Channels: {ctx.channels}") \ No newline at end of file diff --git a/agents/manual-assistants/run.py b/agents/manual-assistants/run.py new file mode 100644 index 0000000..2aa5f45 --- /dev/null +++ b/agents/manual-assistants/run.py @@ -0,0 +1,55 @@ +import yaml +from openai import OpenAI +import os +import threading +import dotenv +import argparse +import sys +import pathlib + +from context import Context +import network +from agent import Agent +import agentProcessor + +dotenv.load_dotenv() +api_key = os.getenv('OPENAI_API_KEY') +if api_key is None: + raise ValueError('The OPENAI_API_KEY environment variable is not set.') + +client = OpenAI(api_key=api_key) + +# Setup argument parser +parser = argparse.ArgumentParser(description='Load agents configuration from a YAML file.') +parser.add_argument('agentsYAML', nargs='?', help='Path to the agents YAML file.') + +# Parse arguments +args = parser.parse_args() + +# Check if the agents.yaml file path is provided +if args.agentsYAML is None: + parser.print_help() + sys.exit(1) + +# Construct the absolute path to the agents.yaml file +yaml_file_path = os.path.join(pathlib.Path(__file__).parent.resolve(), args.agentsYAML) + +# Check if the provided file path exists +if not os.path.isfile(yaml_file_path): + print(f"Error: The file {yaml_file_path} does not exist.") + sys.exit(1) + +with open(yaml_file_path, 'r') as stream: + agent_properties = yaml.safe_load(stream) + agents = [Agent(properties) for properties in agent_properties] + +print(f"Agents: {agents}") + +ctx = Context(client, agents) + +network.build(ctx) +threading.Thread(target=agentProcessor.processPendingActions, args=(ctx,)).start() +for agent in agents: + threading.Thread(target=agentProcessor.processThread, args=(ctx, agent,)).start() + +ctx.queues['Boss'].put("Explain how clouds are formed in 100 words or less") \ No newline at end of file From d71643e6b3aa1b4c24c73cb4b54d7b8df3adf2df Mon Sep 17 00:00:00 2001 From: guillermo-delrio Date: Thu, 16 Nov 2023 21:57:49 +0100 Subject: 
[PATCH 110/141] Rename for consistency --- agents/{manual-assistants => manual_assistants}/README.md | 0 agents/{manual-assistants => manual_assistants}/agent.py | 0 .../agentFunctions/README.md | 0 .../agentFunctions/__init__.py | 0 .../agentFunctions/assignTask.py | 0 .../agentFunctions/broadcast.py | 0 .../agentFunctions/resolveTask.py | 0 .../agentFunctions/sendMessage.py | 0 agents/{manual-assistants => manual_assistants}/agentProcessor.py | 0 agents/{manual-assistants => manual_assistants}/context.py | 0 .../definitions/boss-worker3/README.md | 0 .../definitions/boss-worker3/agents.yaml | 0 .../definitions/boss-worker3/prompts.md | 0 .../definitions/text-manipulation/agents.yaml | 0 .../definitions/text-manipulation/prompts.md | 0 agents/{manual-assistants => manual_assistants}/network.py | 0 agents/{manual-assistants => manual_assistants}/run.py | 0 17 files changed, 0 insertions(+), 0 deletions(-) rename agents/{manual-assistants => manual_assistants}/README.md (100%) rename agents/{manual-assistants => manual_assistants}/agent.py (100%) rename agents/{manual-assistants => manual_assistants}/agentFunctions/README.md (100%) rename agents/{manual-assistants => manual_assistants}/agentFunctions/__init__.py (100%) rename agents/{manual-assistants => manual_assistants}/agentFunctions/assignTask.py (100%) rename agents/{manual-assistants => manual_assistants}/agentFunctions/broadcast.py (100%) rename agents/{manual-assistants => manual_assistants}/agentFunctions/resolveTask.py (100%) rename agents/{manual-assistants => manual_assistants}/agentFunctions/sendMessage.py (100%) rename agents/{manual-assistants => manual_assistants}/agentProcessor.py (100%) rename agents/{manual-assistants => manual_assistants}/context.py (100%) rename agents/{manual-assistants => manual_assistants}/definitions/boss-worker3/README.md (100%) rename agents/{manual-assistants => manual_assistants}/definitions/boss-worker3/agents.yaml (100%) rename agents/{manual-assistants => manual_assistants}/definitions/boss-worker3/prompts.md (100%) rename agents/{manual-assistants => manual_assistants}/definitions/text-manipulation/agents.yaml (100%) rename agents/{manual-assistants => manual_assistants}/definitions/text-manipulation/prompts.md (100%) rename agents/{manual-assistants => manual_assistants}/network.py (100%) rename agents/{manual-assistants => manual_assistants}/run.py (100%) diff --git a/agents/manual-assistants/README.md b/agents/manual_assistants/README.md similarity index 100% rename from agents/manual-assistants/README.md rename to agents/manual_assistants/README.md diff --git a/agents/manual-assistants/agent.py b/agents/manual_assistants/agent.py similarity index 100% rename from agents/manual-assistants/agent.py rename to agents/manual_assistants/agent.py diff --git a/agents/manual-assistants/agentFunctions/README.md b/agents/manual_assistants/agentFunctions/README.md similarity index 100% rename from agents/manual-assistants/agentFunctions/README.md rename to agents/manual_assistants/agentFunctions/README.md diff --git a/agents/manual-assistants/agentFunctions/__init__.py b/agents/manual_assistants/agentFunctions/__init__.py similarity index 100% rename from agents/manual-assistants/agentFunctions/__init__.py rename to agents/manual_assistants/agentFunctions/__init__.py diff --git a/agents/manual-assistants/agentFunctions/assignTask.py b/agents/manual_assistants/agentFunctions/assignTask.py similarity index 100% rename from agents/manual-assistants/agentFunctions/assignTask.py rename to 
agents/manual_assistants/agentFunctions/assignTask.py diff --git a/agents/manual-assistants/agentFunctions/broadcast.py b/agents/manual_assistants/agentFunctions/broadcast.py similarity index 100% rename from agents/manual-assistants/agentFunctions/broadcast.py rename to agents/manual_assistants/agentFunctions/broadcast.py diff --git a/agents/manual-assistants/agentFunctions/resolveTask.py b/agents/manual_assistants/agentFunctions/resolveTask.py similarity index 100% rename from agents/manual-assistants/agentFunctions/resolveTask.py rename to agents/manual_assistants/agentFunctions/resolveTask.py diff --git a/agents/manual-assistants/agentFunctions/sendMessage.py b/agents/manual_assistants/agentFunctions/sendMessage.py similarity index 100% rename from agents/manual-assistants/agentFunctions/sendMessage.py rename to agents/manual_assistants/agentFunctions/sendMessage.py diff --git a/agents/manual-assistants/agentProcessor.py b/agents/manual_assistants/agentProcessor.py similarity index 100% rename from agents/manual-assistants/agentProcessor.py rename to agents/manual_assistants/agentProcessor.py diff --git a/agents/manual-assistants/context.py b/agents/manual_assistants/context.py similarity index 100% rename from agents/manual-assistants/context.py rename to agents/manual_assistants/context.py diff --git a/agents/manual-assistants/definitions/boss-worker3/README.md b/agents/manual_assistants/definitions/boss-worker3/README.md similarity index 100% rename from agents/manual-assistants/definitions/boss-worker3/README.md rename to agents/manual_assistants/definitions/boss-worker3/README.md diff --git a/agents/manual-assistants/definitions/boss-worker3/agents.yaml b/agents/manual_assistants/definitions/boss-worker3/agents.yaml similarity index 100% rename from agents/manual-assistants/definitions/boss-worker3/agents.yaml rename to agents/manual_assistants/definitions/boss-worker3/agents.yaml diff --git a/agents/manual-assistants/definitions/boss-worker3/prompts.md b/agents/manual_assistants/definitions/boss-worker3/prompts.md similarity index 100% rename from agents/manual-assistants/definitions/boss-worker3/prompts.md rename to agents/manual_assistants/definitions/boss-worker3/prompts.md diff --git a/agents/manual-assistants/definitions/text-manipulation/agents.yaml b/agents/manual_assistants/definitions/text-manipulation/agents.yaml similarity index 100% rename from agents/manual-assistants/definitions/text-manipulation/agents.yaml rename to agents/manual_assistants/definitions/text-manipulation/agents.yaml diff --git a/agents/manual-assistants/definitions/text-manipulation/prompts.md b/agents/manual_assistants/definitions/text-manipulation/prompts.md similarity index 100% rename from agents/manual-assistants/definitions/text-manipulation/prompts.md rename to agents/manual_assistants/definitions/text-manipulation/prompts.md diff --git a/agents/manual-assistants/network.py b/agents/manual_assistants/network.py similarity index 100% rename from agents/manual-assistants/network.py rename to agents/manual_assistants/network.py diff --git a/agents/manual-assistants/run.py b/agents/manual_assistants/run.py similarity index 100% rename from agents/manual-assistants/run.py rename to agents/manual_assistants/run.py From 35c03d4bb1911f11f54b7b9a884bc61589d7bd20 Mon Sep 17 00:00:00 2001 From: guillermo-delrio Date: Fri, 17 Nov 2023 12:01:25 +0100 Subject: [PATCH 111/141] Add hints for Agent's properties --- agents/manual_assistants/agent.py | 6 ++++++ 1 file changed, 6 insertions(+) diff --git 
a/agents/manual_assistants/agent.py b/agents/manual_assistants/agent.py index a67eadf..7f473ec 100644 --- a/agents/manual_assistants/agent.py +++ b/agents/manual_assistants/agent.py @@ -1,4 +1,10 @@ class Agent: + name: str + id: str + talksTo: list[str] + channels: list[str] + functions: list[str] + def __init__(self, properties): # Initialize all properties from the dictionary for key, value in properties.items(): From ef3fee0ecfce270a9b0fccd9dfd074d16eaa7a11 Mon Sep 17 00:00:00 2001 From: guillermo-delrio Date: Fri, 17 Nov 2023 12:10:13 +0100 Subject: [PATCH 112/141] Move innitial message to YAML --- agents/manual_assistants/agent.py | 1 + agents/manual_assistants/definitions/boss-worker3/agents.yaml | 1 + agents/manual_assistants/run.py | 4 +++- 3 files changed, 5 insertions(+), 1 deletion(-) diff --git a/agents/manual_assistants/agent.py b/agents/manual_assistants/agent.py index 7f473ec..27b48e6 100644 --- a/agents/manual_assistants/agent.py +++ b/agents/manual_assistants/agent.py @@ -4,6 +4,7 @@ class Agent: talksTo: list[str] channels: list[str] functions: list[str] + innitMessage: str def __init__(self, properties): # Initialize all properties from the dictionary diff --git a/agents/manual_assistants/definitions/boss-worker3/agents.yaml b/agents/manual_assistants/definitions/boss-worker3/agents.yaml index 09d8ac1..5cd1cde 100644 --- a/agents/manual_assistants/definitions/boss-worker3/agents.yaml +++ b/agents/manual_assistants/definitions/boss-worker3/agents.yaml @@ -2,6 +2,7 @@ id: "asst_xWMYPN4yIJfrhD7S9AzLI0TO" talksTo: ["USER", "Worker 1", "Worker 2", "Worker 3"] functions: ["assignTask"] + innitMessage: "Explain how clouds are formed in 100 words or less" - name: "Worker 1" id: "asst_akckTEWToGurRT3Zcp33Ahe8" talksTo: ["Boss", "Worker 2", "Worker 3"] diff --git a/agents/manual_assistants/run.py b/agents/manual_assistants/run.py index 2aa5f45..52e1f62 100644 --- a/agents/manual_assistants/run.py +++ b/agents/manual_assistants/run.py @@ -52,4 +52,6 @@ for agent in agents: threading.Thread(target=agentProcessor.processThread, args=(ctx, agent,)).start() -ctx.queues['Boss'].put("Explain how clouds are formed in 100 words or less") \ No newline at end of file +for agent in agents: + if hasattr(agent, 'innitMessage'): + ctx.queues[agent.name].put(agent.innitMessage) \ No newline at end of file From edf1137533ba76d924127385097d626834bfab14 Mon Sep 17 00:00:00 2001 From: guillermo-delrio Date: Fri, 17 Nov 2023 12:38:29 +0100 Subject: [PATCH 113/141] Load assistant's IDs from separate env file --- .gitignore | 1 + .../definitions/boss-worker3/agents.yaml | 4 -- agents/manual_assistants/run.py | 37 ++++++++++++++----- 3 files changed, 28 insertions(+), 14 deletions(-) diff --git a/.gitignore b/.gitignore index ff57ac5..f475b0c 100644 --- a/.gitignore +++ b/.gitignore @@ -1,4 +1,5 @@ key_openai.txt /**/*/.env +/**/*/agents.env /**/*/__pycache__ .vscode \ No newline at end of file diff --git a/agents/manual_assistants/definitions/boss-worker3/agents.yaml b/agents/manual_assistants/definitions/boss-worker3/agents.yaml index 5cd1cde..8b206da 100644 --- a/agents/manual_assistants/definitions/boss-worker3/agents.yaml +++ b/agents/manual_assistants/definitions/boss-worker3/agents.yaml @@ -1,20 +1,16 @@ - name: "Boss" - id: "asst_xWMYPN4yIJfrhD7S9AzLI0TO" talksTo: ["USER", "Worker 1", "Worker 2", "Worker 3"] functions: ["assignTask"] innitMessage: "Explain how clouds are formed in 100 words or less" - name: "Worker 1" - id: "asst_akckTEWToGurRT3Zcp33Ahe8" talksTo: ["Boss", "Worker 2", "Worker 
3"] channels: ["Worker"] functions: ["resolveTask", "broadcast"] - name: "Worker 2" - id: "asst_3r8hSgGMrjU2TVp9wqlkzsfx" talksTo: ["Boss", "Worker 1", "Worker 3"] channels: ["Worker"] functions: ["resolveTask", "broadcast"] - name: "Worker 3" - id: "asst_9u163B8jxCbJzGfBVFc2mMva" talksTo: ["Boss", "Worker 1", "Worker 2"] channels: ["Worker"] functions: ["resolveTask", "broadcast"] \ No newline at end of file diff --git a/agents/manual_assistants/run.py b/agents/manual_assistants/run.py index 52e1f62..9123a9e 100644 --- a/agents/manual_assistants/run.py +++ b/agents/manual_assistants/run.py @@ -20,32 +20,49 @@ client = OpenAI(api_key=api_key) # Setup argument parser -parser = argparse.ArgumentParser(description='Load agents configuration from a YAML file.') -parser.add_argument('agentsYAML', nargs='?', help='Path to the agents YAML file.') +parser = argparse.ArgumentParser(description='Load agents configuration its configuration folder.') +parser.add_argument('agentsDefinitionFolder', nargs='?', help='Path to the agents definition folder. Should contain a "agent.yaml" file') # Parse arguments args = parser.parse_args() # Check if the agents.yaml file path is provided -if args.agentsYAML is None: +if args.agentsDefinitionFolder is None: parser.print_help() sys.exit(1) # Construct the absolute path to the agents.yaml file -yaml_file_path = os.path.join(pathlib.Path(__file__).parent.resolve(), args.agentsYAML) +workDir = pathlib.Path(__file__).parent.resolve() +agentsYAML = os.path.join(workDir, args.agentsDefinitionFolder, "agents.yaml") # Check if the provided file path exists -if not os.path.isfile(yaml_file_path): - print(f"Error: The file {yaml_file_path} does not exist.") +if not os.path.isfile(agentsYAML): + print(f"Error: The file {agentsYAML} does not exist.") sys.exit(1) -with open(yaml_file_path, 'r') as stream: - agent_properties = yaml.safe_load(stream) - agents = [Agent(properties) for properties in agent_properties] +with open(agentsYAML, 'r') as stream: + agentsProperties = yaml.safe_load(stream) + agents = [Agent(properties) for properties in agentsProperties] + +ctx = Context(client, agents) + +# LOAD ENV IDs +agentsEnv = os.path.join(workDir, args.agentsDefinitionFolder, "agents.env") +if os.path.isfile(agentsEnv): + with open(agentsEnv, 'r') as stream: + envProperties = yaml.safe_load(stream) + for properties in envProperties: # For each agent + for agent in agents: # Find its definition + if agent.name == properties['name']: + if not hasattr(agent, 'id'): # If ID is not hardcoded set it + agent.id = properties['id'] print(f"Agents: {agents}") -ctx = Context(client, agents) +# Create new assistants +for agent in agents: + if not hasattr(agent, 'id'): # It's a new agent + print("create assistant") # TODO network.build(ctx) threading.Thread(target=agentProcessor.processPendingActions, args=(ctx,)).start() From f53ea73ca48cba1b0abfc4dbde75323de99fec55 Mon Sep 17 00:00:00 2001 From: guillermo-delrio Date: Fri, 17 Nov 2023 12:41:44 +0100 Subject: [PATCH 114/141] Fix typo --- agents/manual_assistants/agent.py | 2 +- agents/manual_assistants/definitions/boss-worker3/agents.yaml | 2 +- agents/manual_assistants/run.py | 4 ++-- 3 files changed, 4 insertions(+), 4 deletions(-) diff --git a/agents/manual_assistants/agent.py b/agents/manual_assistants/agent.py index 27b48e6..c849ce4 100644 --- a/agents/manual_assistants/agent.py +++ b/agents/manual_assistants/agent.py @@ -4,7 +4,7 @@ class Agent: talksTo: list[str] channels: list[str] functions: list[str] - innitMessage: str + 
initMessage: str def __init__(self, properties): # Initialize all properties from the dictionary diff --git a/agents/manual_assistants/definitions/boss-worker3/agents.yaml b/agents/manual_assistants/definitions/boss-worker3/agents.yaml index 8b206da..aa485c0 100644 --- a/agents/manual_assistants/definitions/boss-worker3/agents.yaml +++ b/agents/manual_assistants/definitions/boss-worker3/agents.yaml @@ -1,7 +1,7 @@ - name: "Boss" talksTo: ["USER", "Worker 1", "Worker 2", "Worker 3"] functions: ["assignTask"] - innitMessage: "Explain how clouds are formed in 100 words or less" + initMessage: "Explain how clouds are formed in 100 words or less" - name: "Worker 1" talksTo: ["Boss", "Worker 2", "Worker 3"] channels: ["Worker"] diff --git a/agents/manual_assistants/run.py b/agents/manual_assistants/run.py index 9123a9e..8d458da 100644 --- a/agents/manual_assistants/run.py +++ b/agents/manual_assistants/run.py @@ -70,5 +70,5 @@ threading.Thread(target=agentProcessor.processThread, args=(ctx, agent,)).start() for agent in agents: - if hasattr(agent, 'innitMessage'): - ctx.queues[agent.name].put(agent.innitMessage) \ No newline at end of file + if hasattr(agent, 'initMessage'): + ctx.queues[agent.name].put(agent.initMessage) \ No newline at end of file From 4ce858f83cff2b4434f1864ab5882ad421a9a9c2 Mon Sep 17 00:00:00 2001 From: David Shapiro Date: Fri, 17 Nov 2023 12:19:30 -0500 Subject: [PATCH 115/141] Update contributing.md --- contributing.md | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/contributing.md b/contributing.md index 6463265..c3cca6a 100644 --- a/contributing.md +++ b/contributing.md @@ -18,4 +18,6 @@ Thank you for your interest in contributing to the Hierarchical Autonomous Agent 2. **Adhere to the C3P0 Policy**: We follow the Collaborative Culture Community Policy: Zero Tolerance (C3P0) for harmful behavior and time-wasting. [C3P0 Policy](https://github.com/daveshap/C3P0). -3. **PR Requirements**: All PRs must include a clear description. Limit submissions to one PR per day, ensuring it adheres to the project's style and structure. Refraining from reformatting, refactoring, or restructuring the project is crucial—non-compliant PRs will be rejected. \ No newline at end of file +3. **PR Requirements**: All PRs must include a clear description. Limit submissions to one PR per day, ensuring it adheres to the project's style and structure. Refraining from reformatting, refactoring, or restructuring the project is crucial—non-compliant PRs will be rejected. + +4. **Examples and Demos**: All PRs must include examples and demos along with the code. This can be documented in a README, a link to a video, or screenshots. But we need to ensure that it's clear we understand what the code does. 
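Note on the two patches above: [PATCH 113/141] moves the hardcoded assistant IDs out of `agents.yaml` and into a separate, git-ignored `agents.env` file that `run.py` reads at startup, and [PATCH 114/141] corrects the `innitMessage`/`initMessage` typo. The snippet below is a minimal, self-contained sketch of that ID-loading step only, assuming `agents.env` is a YAML list of `{name, id}` entries (the format the later save logic in the series writes); the inline YAML strings, the placeholder assistant ID, and the trimmed `Agent` class are illustrative, not copied from the repository.

```python
# Sketch (under stated assumptions, not the repository's exact run.py) of how
# stored assistant IDs from agents.env are attached to Agent objects parsed
# from agents.yaml. Requires PyYAML. Sample values are placeholders.
import yaml


class Agent:
    def __init__(self, properties):
        # Mirrors the repo's agent.py: copy every YAML property onto the instance.
        for key, value in properties.items():
            setattr(self, key, value)


agents_yaml = """
- name: "Boss"
  talksTo: ["USER", "Worker 1"]
  functions: ["assignTask"]
- name: "Worker 1"
  talksTo: ["Boss"]
  functions: ["resolveTask", "broadcast"]
"""

# Assumed agents.env format: a YAML list of name/id pairs, one per assistant.
agents_env = """
- name: "Boss"
  id: "asst_PLACEHOLDER_ID"
"""

agents = [Agent(props) for props in yaml.safe_load(agents_yaml)]

# Attach a stored ID only to agents that do not already have one hardcoded.
for props in yaml.safe_load(agents_env) or []:
    for agent in agents:
        if agent.name == props["name"] and not hasattr(agent, "id"):
            agent.id = props["id"]

# Agents still missing an id would be created as new OpenAI assistants
# (the "create assistant" TODO in PATCH 113, implemented later in the series).
for agent in agents:
    print(agent.name, getattr(agent, "id", "<new assistant needed>"))
```

Keeping the generated IDs in `agents.env` (added to `.gitignore` in the same patch) lets the checked-in `agents.yaml` hold only the reusable definitions, while each user's account-specific assistant IDs stay local.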
From 61c6bd37b03d3b7783d8a7a6b0df7b09a2dd1551 Mon Sep 17 00:00:00 2001 From: guillermo-delrio Date: Sun, 19 Nov 2023 18:04:06 +0100 Subject: [PATCH 116/141] Add req.txt and update parameer name --- agents/manual_assistants/README.md | 22 ++++++++++++++++++++++ agents/manual_assistants/requirements.txt | 1 + agents/manual_assistants/run.py | 8 ++++---- 3 files changed, 27 insertions(+), 4 deletions(-) create mode 100644 agents/manual_assistants/requirements.txt diff --git a/agents/manual_assistants/README.md b/agents/manual_assistants/README.md index ce36d0c..199e5f2 100644 --- a/agents/manual_assistants/README.md +++ b/agents/manual_assistants/README.md @@ -1,6 +1,28 @@ # Objective This folder includes some initial tests on how function calling may enable HAAS communication and cognition. A network with a limited number of agents is created as a test to identify issues and help guide the architecture: one boss talking to three worker agents. +# How to execute +``` +pip install -r requirements.txt +# If you alredy had the openai module installed you may need to: +# pip install openai --upgrade + +# Create a .env file on this path containing the OpenAI API key you wish to use. https://platform.openai.com/api-keys +# OPENAI_API_KEY=sk-********** + +python run.py definitions/boss-worker3/ +``` + + + + + + + + + + + # Observations ## Choosing what to propagate The simplest approach when connecting multiple agents is to have the downstream agents get all the messages from their source nodes. This limits the capacity of the model greatly. diff --git a/agents/manual_assistants/requirements.txt b/agents/manual_assistants/requirements.txt new file mode 100644 index 0000000..f0dd0ae --- /dev/null +++ b/agents/manual_assistants/requirements.txt @@ -0,0 +1 @@ +openai \ No newline at end of file diff --git a/agents/manual_assistants/run.py b/agents/manual_assistants/run.py index 8d458da..20f1ede 100644 --- a/agents/manual_assistants/run.py +++ b/agents/manual_assistants/run.py @@ -20,14 +20,14 @@ client = OpenAI(api_key=api_key) # Setup argument parser -parser = argparse.ArgumentParser(description='Load agents configuration its configuration folder.') -parser.add_argument('agentsDefinitionFolder', nargs='?', help='Path to the agents definition folder. Should contain a "agent.yaml" file') +parser = argparse.ArgumentParser(description='Load agents configuration from its configuration folder.') +parser.add_argument('--agents-definition-folder', dest='agentsDefinitionFolder', required=False, help='Path to the agents definition folder. Should contain an "agent.yaml" file') # Parse arguments args = parser.parse_args() -# Check if the agents.yaml file path is provided -if args.agentsDefinitionFolder is None: +# Check if the agents-definition-folder argument was passed +if not args.agentsDefinitionFolder: parser.print_help() sys.exit(1) From e09e141d960ad0599d5a724cb97923e7cdf6134b Mon Sep 17 00:00:00 2001 From: guillermo-delrio Date: Sun, 19 Nov 2023 22:37:48 +0100 Subject: [PATCH 117/141] Update readme --- agents/manual_assistants/README.md | 12 +----------- 1 file changed, 1 insertion(+), 11 deletions(-) diff --git a/agents/manual_assistants/README.md b/agents/manual_assistants/README.md index 199e5f2..6d0d628 100644 --- a/agents/manual_assistants/README.md +++ b/agents/manual_assistants/README.md @@ -10,19 +10,9 @@ pip install -r requirements.txt # Create a .env file on this path containing the OpenAI API key you wish to use. 
https://platform.openai.com/api-keys # OPENAI_API_KEY=sk-********** -python run.py definitions/boss-worker3/ +python run.py --agents-definition-folder definitions/boss-worker3/ ``` - - - - - - - - - - # Observations ## Choosing what to propagate The simplest approach when connecting multiple agents is to have the downstream agents get all the messages from their source nodes. This limits the capacity of the model greatly. From ebb7ba30fcca42fc799b55edb7f0e18df2ac6b1b Mon Sep 17 00:00:00 2001 From: guillermo-delrio Date: Mon, 20 Nov 2023 01:09:28 +0100 Subject: [PATCH 118/141] Automate the creation of assistants based off the definition files --- .gitignore | 2 +- agents/manual_assistants/OAIWrapper.py | 19 +++++++ agents/manual_assistants/agent.py | 16 ++++-- agents/manual_assistants/agentEnvHandler.py | 10 ++++ .../agentFunctions/__init__.py | 4 -- .../agentFunctions/assignTask.py | 15 ----- .../agentFunctions/broadcast.py | 21 ------- .../agentFunctions/resolveTask.py | 24 -------- .../agentFunctions/sendMessage.py | 19 ------- agents/manual_assistants/agentProcessor.py | 18 ++++-- .../{agentFunctions => agentTools}/README.md | 0 .../manual_assistants/agentTools/__init__.py | 4 ++ .../agentTools/assignTask.py | 38 +++++++++++++ .../manual_assistants/agentTools/broadcast.py | 46 +++++++++++++++ .../agentTools/resolveTask.py | 42 ++++++++++++++ .../agentTools/sendMessage.py | 43 ++++++++++++++ .../definitions/boss-worker3/agents.yaml | 56 +++++++++++++++++-- agents/manual_assistants/execution.py | 20 +++++++ agents/manual_assistants/run.py | 20 +++++-- 19 files changed, 313 insertions(+), 104 deletions(-) create mode 100644 agents/manual_assistants/OAIWrapper.py create mode 100644 agents/manual_assistants/agentEnvHandler.py delete mode 100644 agents/manual_assistants/agentFunctions/__init__.py delete mode 100644 agents/manual_assistants/agentFunctions/assignTask.py delete mode 100644 agents/manual_assistants/agentFunctions/broadcast.py delete mode 100644 agents/manual_assistants/agentFunctions/resolveTask.py delete mode 100644 agents/manual_assistants/agentFunctions/sendMessage.py rename agents/manual_assistants/{agentFunctions => agentTools}/README.md (100%) create mode 100644 agents/manual_assistants/agentTools/__init__.py create mode 100644 agents/manual_assistants/agentTools/assignTask.py create mode 100644 agents/manual_assistants/agentTools/broadcast.py create mode 100644 agents/manual_assistants/agentTools/resolveTask.py create mode 100644 agents/manual_assistants/agentTools/sendMessage.py create mode 100644 agents/manual_assistants/execution.py diff --git a/.gitignore b/.gitignore index f475b0c..2c30aa9 100644 --- a/.gitignore +++ b/.gitignore @@ -1,5 +1,5 @@ key_openai.txt /**/*/.env -/**/*/agents.env +/**/*/agentsIds.env /**/*/__pycache__ .vscode \ No newline at end of file diff --git a/agents/manual_assistants/OAIWrapper.py b/agents/manual_assistants/OAIWrapper.py new file mode 100644 index 0000000..807592c --- /dev/null +++ b/agents/manual_assistants/OAIWrapper.py @@ -0,0 +1,19 @@ +from agent import Agent +from openai import OpenAI +from agentTools import * + +def createAssistant(client: OpenAI, agent: Agent): + toolList=[] + if hasattr(agent, "tools"): + for tool in agent.tools: + toolClass=globals().get(tool, None) + toolDict={"type": "function", "function": toolClass.definition} + toolList.append(toolDict) + print(toolList) + assistant = client.beta.assistants.create( + name=agent.name, + instructions=agent.instructions, + tools=toolList, + model=agent.model + ) + 
agent.id=assistant.id \ No newline at end of file diff --git a/agents/manual_assistants/agent.py b/agents/manual_assistants/agent.py index c849ce4..a161dab 100644 --- a/agents/manual_assistants/agent.py +++ b/agents/manual_assistants/agent.py @@ -1,13 +1,21 @@ class Agent: + # From OAI assistant's API name: str id: str + instructions: str + tools: list[str] + model: str + + # Custom talksTo: list[str] channels: list[str] - functions: list[str] initMessage: str - - def __init__(self, properties): - # Initialize all properties from the dictionary + + def __init__(self, properties): + # Set default values + self.model="gpt-4-1106-preview" + + # Overwrite with provided values from YAML for key, value in properties.items(): setattr(self, key, value) diff --git a/agents/manual_assistants/agentEnvHandler.py b/agents/manual_assistants/agentEnvHandler.py new file mode 100644 index 0000000..e707a73 --- /dev/null +++ b/agents/manual_assistants/agentEnvHandler.py @@ -0,0 +1,10 @@ +from agent import Agent +import yaml + +def saveId(agentsIdsFile: str, agent: Agent): + with open(agentsIdsFile, 'r') as file: + data = yaml.safe_load(file) or [] + + data.extend([{"name": agent.name, "id": agent.id}]) + with open(agentsIdsFile, 'w') as file: + yaml.dump(data, file) \ No newline at end of file diff --git a/agents/manual_assistants/agentFunctions/__init__.py b/agents/manual_assistants/agentFunctions/__init__.py deleted file mode 100644 index 69abee3..0000000 --- a/agents/manual_assistants/agentFunctions/__init__.py +++ /dev/null @@ -1,4 +0,0 @@ -from .sendMessage import sendMessage -from .broadcast import broadcast -from .assignTask import assignTask -from .resolveTask import resolveTask \ No newline at end of file diff --git a/agents/manual_assistants/agentFunctions/assignTask.py b/agents/manual_assistants/agentFunctions/assignTask.py deleted file mode 100644 index 81fc310..0000000 --- a/agents/manual_assistants/agentFunctions/assignTask.py +++ /dev/null @@ -1,15 +0,0 @@ -from context import Context -from agent import Agent - -def assignTask(ctx: Context, agent: Agent, actionId: str, arguments: {}, threadId: str, runId: str): - print(f"[{agent.name}]>[ASSIGN TASK {actionId}]>[{arguments['assignee']}] {arguments['task']}") - ctx.pendingActions.append({ - "id": actionId, - "agent": agent.name, - "threadId": threadId, - "runId": runId, - "outputs": {}}) - - ctx.agentsWaitingForActions.append(agent.name) - - ctx.queues[arguments['assignee']].put(f"Task id: {actionId}\n{arguments['task']}") \ No newline at end of file diff --git a/agents/manual_assistants/agentFunctions/broadcast.py b/agents/manual_assistants/agentFunctions/broadcast.py deleted file mode 100644 index cfb6487..0000000 --- a/agents/manual_assistants/agentFunctions/broadcast.py +++ /dev/null @@ -1,21 +0,0 @@ -from context import Context -from agent import Agent - -def broadcast(ctx: Context, agent: Agent, arguments: {}, actionId: str): - if hasattr(agent, 'channels') and (arguments['channel'] in agent.channels): - for channel in ctx.channels: - if channel['name'] == arguments['channel']: - print(f"[{agent.name}]->({arguments['channel']}) {arguments['message']}") - for recipient in channel['agents']: - if recipient != agent.name: # Do not queue the message on the agent that sent in - ctx.queues[recipient].put(arguments['message']) - return { - "tool_call_id": actionId, - "output": "Message sent" - } - else: - print(f"[{agent.name}] ERROR unkown channel {arguments['channel']}") - return { - "tool_call_id": actionId, - "output": "Unkown channel" - } \ 
No newline at end of file diff --git a/agents/manual_assistants/agentFunctions/resolveTask.py b/agents/manual_assistants/agentFunctions/resolveTask.py deleted file mode 100644 index e4b3d82..0000000 --- a/agents/manual_assistants/agentFunctions/resolveTask.py +++ /dev/null @@ -1,24 +0,0 @@ -import os -from context import Context -from agent import Agent - -def resolveTask(ctx: Context, agent: Agent, arguments: {}): - print(f"[{agent.name}]>[RESOLVE TASK {arguments['id']}] {arguments['result']}") - os._exit(0) - # outputs = [] - # outputs.append({ - # "tool_call_id": arguments['id'], - # "output": arguments['result'] - # }) - # for pendingAction in ctx.pendingActions: - # if pendingAction['id'] == arguments['id']: - # ctx.client.beta.threads.runs.submit_tool_outputs( - # thread_id=pendingAction['threadId'], - # run_id=pendingAction['runId'], - # tool_outputs=outputs - # ) - - # return { - # "tool_call_id": ctx.action.id, - # "output": "Task resolved" - # } \ No newline at end of file diff --git a/agents/manual_assistants/agentFunctions/sendMessage.py b/agents/manual_assistants/agentFunctions/sendMessage.py deleted file mode 100644 index e3d6401..0000000 --- a/agents/manual_assistants/agentFunctions/sendMessage.py +++ /dev/null @@ -1,19 +0,0 @@ -import os -from context import Context -from agent import Agent - -def sendMessage(ctx: Context, agent: Agent, arguments: {}): - if hasattr(agent, 'talksTo') and (arguments['recipient'] in agent.talksTo): - if arguments['recipient'] == "USER": - print(f"[{ctx.agent.name}] Result: {arguments['message']}") - os._exit(0) - else: - print(f"[{ctx.agent.name}]->[{arguments['recipient']}] {arguments['message']}") - - ctx.queues[arguments['recipient']].put(arguments['message']) - return { - "tool_call_id": ctx.action.id, - "output": "Message sent" - } - else: - print(f"[{agent.name}] ERROR unkown recipient {arguments['recipient']}") \ No newline at end of file diff --git a/agents/manual_assistants/agentProcessor.py b/agents/manual_assistants/agentProcessor.py index fd5997f..ed32762 100644 --- a/agents/manual_assistants/agentProcessor.py +++ b/agents/manual_assistants/agentProcessor.py @@ -1,8 +1,10 @@ import time import json -import agentFunctions +import agentTools from context import Context from agent import Agent +import os +from execution import Execution def processPendingActions(ctx: Context): while True: @@ -25,7 +27,7 @@ def processThread(ctx: Context, agent: Agent): thread = ctx.client.beta.threads.create() print(f"[{agent.name}] Thread {thread.id}") - + print(f"https://platform.openai.com/playground?mode=assistant&assistant={agent.id}&thread={thread.id}") print("") queue = ctx.queues[agent.name] waitingForMessages = True @@ -77,17 +79,19 @@ def processThread(ctx: Context, agent: Agent): outputs = [] submitOutput=True for action in run.required_action.submit_tool_outputs.tool_calls: + function_name = action.function.name arguments = json.loads(action.function.arguments) + execution = Execution(threadId=thread.id, runId=run.id, actionId=action.id, arguments=arguments) if function_name == 'sendMessage': - output = agentFunctions.sendMessage(ctx, agent, arguments) + output = agentTools.sendMessage.execute(ctx, agent, execution) elif function_name == 'broadcast': - output = agentFunctions.broadcast(ctx, agent, arguments, action.id) + output = agentTools.broadcast.execute(ctx, agent, execution) elif function_name == 'assignTask': - output = agentFunctions.assignTask(ctx, agent, action.id, arguments, thread.id, run.id) + output = 
agentTools.assignTask.execute(ctx, agent, execution) submitOutput=False elif function_name == 'resolveTask': - output = agentFunctions.resolveTask(ctx, agent, arguments) + output = agentTools.resolveTask.execute(ctx, agent, execution) else: print(f"[{agent.name}] ERROR unkown function {function_name}") output = { @@ -102,6 +106,8 @@ def processThread(ctx: Context, agent: Agent): run_id=run.id, tool_outputs=outputs ) + if execution.exit: + os._exit(0) if ctx.lock.locked(): ctx.lock.release() print(f"[{agent.name}] RELEASES LOCK") diff --git a/agents/manual_assistants/agentFunctions/README.md b/agents/manual_assistants/agentTools/README.md similarity index 100% rename from agents/manual_assistants/agentFunctions/README.md rename to agents/manual_assistants/agentTools/README.md diff --git a/agents/manual_assistants/agentTools/__init__.py b/agents/manual_assistants/agentTools/__init__.py new file mode 100644 index 0000000..650546b --- /dev/null +++ b/agents/manual_assistants/agentTools/__init__.py @@ -0,0 +1,4 @@ +from .sendMessage import execute +from .broadcast import execute +from .assignTask import execute +from .resolveTask import execute \ No newline at end of file diff --git a/agents/manual_assistants/agentTools/assignTask.py b/agents/manual_assistants/agentTools/assignTask.py new file mode 100644 index 0000000..fef4756 --- /dev/null +++ b/agents/manual_assistants/agentTools/assignTask.py @@ -0,0 +1,38 @@ +from context import Context +from agent import Agent +from execution import Execution + +definition=\ + { + "name": "assignTask", + "description": "Assign a task to the worker agents", + "parameters": { + "type": "object", + "properties": { + "assignee": { + "type": "string", + "description": "Name of the agent assigned to this task" + }, + "task": { + "type": "string", + "description": "Description of the task" + } + }, + "required": [ + "description" + ] + } + } + +def execute(ctx: Context, agent: Agent, execution: Execution): + print(f"[{agent.name}]>[ASSIGN TASK {execution.actionId}]>[{execution.arguments['assignee']}] {execution.arguments['task']}") + ctx.pendingActions.append({ + "id": execution.actionId, + "agent": agent.name, + "threadId": execution.threadId, + "runId": execution.runId, + "outputs": {}}) + + ctx.agentsWaitingForActions.append(agent.name) + + ctx.queues[execution.arguments['assignee']].put(f"Task id: {execution.actionId}\n{execution.arguments['task']}") \ No newline at end of file diff --git a/agents/manual_assistants/agentTools/broadcast.py b/agents/manual_assistants/agentTools/broadcast.py new file mode 100644 index 0000000..caae313 --- /dev/null +++ b/agents/manual_assistants/agentTools/broadcast.py @@ -0,0 +1,46 @@ +from context import Context +from agent import Agent +from execution import Execution + +definition=\ + { + "name": "broadcast", + "description": "Broadcast a message on a channel", + "parameters": { + "type": "object", + "properties": { + "channel": { + "type": "string", + "description": "Channel name to broadcast the message to" + }, + "message": { + "type": "string", + "description": "Message to broadcast" + } + }, + "required": [ + "channel", + "message" + ] + } + } + + +def execute(ctx: Context, agent: Agent, execution: Execution): + if hasattr(agent, 'channels') and (execution.arguments['channel'] in agent.channels): + for channel in ctx.channels: + if channel['name'] == execution.arguments['channel']: + print(f"[{agent.name}]->({execution.arguments['channel']}) {execution.arguments['message']}") + for recipient in channel['agents']: + if 
recipient != agent.name: # Do not queue the message on the agent that sent in + ctx.queues[recipient].put(execution.arguments['message']) + return { + "tool_call_id": execution.actionId, + "output": "Message sent" + } + else: + print(f"[{agent.name}] ERROR unkown channel {execution.arguments['channel']}") + return { + "tool_call_id": execution.actionId, + "output": "Unkown channel" + } \ No newline at end of file diff --git a/agents/manual_assistants/agentTools/resolveTask.py b/agents/manual_assistants/agentTools/resolveTask.py new file mode 100644 index 0000000..e320f66 --- /dev/null +++ b/agents/manual_assistants/agentTools/resolveTask.py @@ -0,0 +1,42 @@ +from context import Context +from agent import Agent +from execution import Execution + +definition=\ + { + "name": "resolveTask", + "description": "Send final task results to the boss agent", + "parameters": { + "type": "object", + "properties": { + "id": { + "type": "string", + "description": "Task id provided when the task was assigned" + }, + "result": { + "type": "string", + "description": "Result of the task" + } + }, + "required": [ + "description" + ] + } + } + + +def execute(ctx: Context, agent: Agent, execution: Execution): + print(f"[{agent.name}]>[RESOLVE TASK {execution.arguments['id']}] {execution.arguments['result']}") + outputs = [] + outputs.append({ + "tool_call_id": execution.arguments['id'], + "output": execution.arguments['result'] + }) + for pendingAction in ctx.pendingActions: + if pendingAction['id'] == execution.arguments['id']: + pendingAction['outout']=outputs + execution.exit = True + return { + "tool_call_id": execution.actionId, + "output": "Task resolved" + } \ No newline at end of file diff --git a/agents/manual_assistants/agentTools/sendMessage.py b/agents/manual_assistants/agentTools/sendMessage.py new file mode 100644 index 0000000..de5d9f2 --- /dev/null +++ b/agents/manual_assistants/agentTools/sendMessage.py @@ -0,0 +1,43 @@ +import os +from context import Context +from agent import Agent +from execution import Execution + +definition=\ + { + "name": "sendMessage", + "description": "Send a message to another agent", + "parameters": { + "type": "object", + "properties": { + "recipient": { + "type": "string", + "description": "Agent name to send the message to" + }, + "message": { + "type": "string", + "description": "Message to send" + } + }, + "required": [ + "recipient", + "message" + ] + } + } + +def execute(ctx: Context, agent: Agent, execution: Execution): + if hasattr(agent, 'talksTo') and (execution.arguments['recipient'] in agent.talksTo): + if execution.arguments['recipient'] == "USER": + print(f"[{ctx.agent.name}] Result: {execution.arguments['message']}") + os._exit(0) + else: + print(f"[{ctx.agent.name}]->[{execution.arguments['recipient']}] {execution.arguments['message']}") + + ctx.queues[execution.arguments['recipient']].put(execution.arguments['message']) + return { + "tool_call_id": ctx.action.id, + "output": "Message sent" + } + else: + print(f"[{agent.name}] ERROR unkown recipient {execution.arguments['recipient']}") \ No newline at end of file diff --git a/agents/manual_assistants/definitions/boss-worker3/agents.yaml b/agents/manual_assistants/definitions/boss-worker3/agents.yaml index aa485c0..1cbd97b 100644 --- a/agents/manual_assistants/definitions/boss-worker3/agents.yaml +++ b/agents/manual_assistants/definitions/boss-worker3/agents.yaml @@ -1,16 +1,64 @@ - name: "Boss" + tools: ["assignTask"] talksTo: ["USER", "Worker 1", "Worker 2", "Worker 3"] - functions: ["assignTask"] 
initMessage: "Explain how clouds are formed in 100 words or less" + instructions: > + MISSION + - You are a boss agent in charge of three worker agents. + - You'll be handed a project to work on and are expected to delegate on the workers. + - Send tasks to the workers one a time. They will collaborate on the tasks you provide and get back to you. + - Wait for a worker response before sending another task. + - Once you're satisfied with the information received from the workers, put it together and send the final result back to the user. + + INSTRUCTIONS + - Complete the task in your mission. + - To talk to other agents call the function 'sendMessage'. At the beginning of th message identify yourself. + - Agents: ["USER", "Worker 1", "Worker 2", "Worker 3"] - name: "Worker 1" + tools: ["resolveTask", "broadcast"] talksTo: ["Boss", "Worker 2", "Worker 3"] channels: ["Worker"] - functions: ["resolveTask", "broadcast"] + instructions: > + MISSION + You are "Worker 1", one of three identical worker agents under a boss agent. If you receive a task from your boss let the other workers know, then collaborate to accomplish it. Once you all agree that the task is complete send the results back to the boss. + + INSTRUCTIONS + - Complete the task in your mission. + - To talk to other worker agents call the function 'broadcast'. At the beginning of the message identify yourself. + - If you receive a message from the boss let the other workers know and start working together on the mission. Make sure to pass the task id provided by the boss. + - If you receive a message from other workers don't reply back unless necessary. Keep the worker channel as free from noise as possible. Share results in the channel to advance the mission, but do not send acknowledgements. + - Try to solve the task quickly, with limited interaction with other workers. + - To send the task results back to the boss call the function 'resolveTask'. Pass the id recieved from the boss when the task was assigned. + - Channels: [{'name': 'Worker', 'agents': ['Worker 1', 'Worker 2', 'Worker 3']}] - name: "Worker 2" + tools: ["resolveTask", "broadcast"] talksTo: ["Boss", "Worker 1", "Worker 3"] channels: ["Worker"] - functions: ["resolveTask", "broadcast"] + instructions: > + MISSION + You are "Worker 2", one of three identical worker agents under a boss agent. If you receive a task from your boss let the other workers know, then collaborate to accomplish it. Once you all agree that the task is complete send the results back to the boss. + + INSTRUCTIONS + - Complete the task in your mission. + - To talk to other worker agents call the function 'broadcast'. At the beginning of the message identify yourself. + - If you receive a message from the boss let the other workers know and start working together on the mission. Make sure to pass the task id provided by the boss. + - If you receive a message from other workers don't reply back unless necessary. Keep the worker channel as free from noise as possible. Share results in the channel to advance the mission, but do not send acknowledgements. + - Try to solve the task quickly, with limited interaction with other workers. + - To send the task results back to the boss call the function 'resolveTask'. Pass the id recieved from the boss when the task was assigned. 
+ - Channels: [{'name': 'Worker', 'agents': ['Worker 1', 'Worker 2', 'Worker 3']}] - name: "Worker 3" + tools: ["resolveTask", "broadcast"] talksTo: ["Boss", "Worker 1", "Worker 2"] channels: ["Worker"] - functions: ["resolveTask", "broadcast"] \ No newline at end of file + instructions: > + MISSION + You are "Worker 3", one of three identical worker agents under a boss agent. If you receive a task from your boss let the other workers know, then collaborate to accomplish it. Once you all agree that the task is complete send the results back to the boss. + + INSTRUCTIONS + - Complete the task in your mission. + - To talk to other worker agents call the function 'broadcast'. At the beginning of the message identify yourself. + - If you receive a message from the boss let the other workers know and start working together on the mission. Make sure to pass the task id provided by the boss. + - If you receive a message from other workers don't reply back unless necessary. Keep the worker channel as free from noise as possible. Share results in the channel to advance the mission, but do not send acknowledgements. + - Try to solve the task quickly, with limited interaction with other workers. + - To send the task results back to the boss call the function 'resolveTask'. Pass the id recieved from the boss when the task was assigned. + - Channels: [{'name': 'Worker', 'agents': ['Worker 1', 'Worker 2', 'Worker 3']}] \ No newline at end of file diff --git a/agents/manual_assistants/execution.py b/agents/manual_assistants/execution.py new file mode 100644 index 0000000..1155a3f --- /dev/null +++ b/agents/manual_assistants/execution.py @@ -0,0 +1,20 @@ +class Execution: + threadId: str + runId: str + actionId: str + arguments: {} + exit: bool + + def __init__(self, threadId: str, runId: str, actionId: str, arguments: {}): + self.threadId = threadId + self.runId = runId + self.actionId = actionId + self.arguments = arguments + self.exit = False + + def __str__(self): + properties_str = ', '.join(f'{key}: {value}' for key, value in self.__dict__.items()) + return f'Execution({properties_str})' + + def __repr__(self): + return self.__str__() \ No newline at end of file diff --git a/agents/manual_assistants/run.py b/agents/manual_assistants/run.py index 20f1ede..0acfc15 100644 --- a/agents/manual_assistants/run.py +++ b/agents/manual_assistants/run.py @@ -11,6 +11,8 @@ import network from agent import Agent import agentProcessor +import OAIWrapper +import agentEnvHandler dotenv.load_dotenv() api_key = os.getenv('OPENAI_API_KEY') @@ -47,11 +49,15 @@ ctx = Context(client, agents) # LOAD ENV IDs -agentsEnv = os.path.join(workDir, args.agentsDefinitionFolder, "agents.env") -if os.path.isfile(agentsEnv): - with open(agentsEnv, 'r') as stream: - envProperties = yaml.safe_load(stream) - for properties in envProperties: # For each agent +agentsIdsFile = os.path.join(workDir, args.agentsDefinitionFolder, "agentsIds.env") +# Ensure the file exists by opening it in append mode, then immediately close it +with open(agentsIdsFile, 'a'): + pass + +with open(agentsIdsFile, 'r') as stream: + agentsIds = yaml.safe_load(stream) + if agentsIds: + for properties in agentsIds: # For each agent for agent in agents: # Find its definition if agent.name == properties['name']: if not hasattr(agent, 'id'): # If ID is not hardcoded set it @@ -59,10 +65,12 @@ print(f"Agents: {agents}") + # Create new assistants for agent in agents: if not hasattr(agent, 'id'): # It's a new agent - print("create assistant") # TODO + 
OAIWrapper.createAssistant(client, agent) + agentEnvHandler.saveId(agentsIdsFile, agent) network.build(ctx) threading.Thread(target=agentProcessor.processPendingActions, args=(ctx,)).start() From 8894a5f5cf28c0a966c38344a9060a817dd1cfb4 Mon Sep 17 00:00:00 2001 From: guillermo-delrio Date: Mon, 20 Nov 2023 01:54:06 +0100 Subject: [PATCH 119/141] Improve pending action handling --- agents/manual_assistants/agentProcessor.py | 90 +++++++++---------- .../agentTools/assignTask.py | 10 +-- agents/manual_assistants/context.py | 1 - agents/manual_assistants/execution.py | 13 +-- agents/manual_assistants/run.py | 7 +- agents/manual_assistants/toolStatus.py | 15 ++++ 6 files changed, 72 insertions(+), 64 deletions(-) create mode 100644 agents/manual_assistants/toolStatus.py diff --git a/agents/manual_assistants/agentProcessor.py b/agents/manual_assistants/agentProcessor.py index ed32762..79d2cba 100644 --- a/agents/manual_assistants/agentProcessor.py +++ b/agents/manual_assistants/agentProcessor.py @@ -6,34 +6,36 @@ import os from execution import Execution -def processPendingActions(ctx: Context): - while True: - for action in ctx.pendingActions: - if action['outputs']: # Output already set - ctx.client.beta.threads.runs.submit_tool_outputs( - thread_id=action['threadId'], - run_id=action['runId'], - tool_outputs=action['outputs'] - ) - ctx.agentsWaitingForActions.remove(action['agent']) - time.sleep(1) +class AgentProcessor: + execution: Execution + + def __init__(self): + self.execution = Execution() + + def processThread(self, ctx: Context, agent: Agent): + messages = [] + + print(f"[{agent.name}] Id: {agent.id}") + if hasattr(agent, 'talksTo'): + print(f"[{agent.name}] Talks to: {agent.talksTo}") + + self.execution.threadId = ctx.client.beta.threads.create().id + print(f"[{agent.name}] Thread {self.execution.threadId}") + print(f"https://platform.openai.com/playground?mode=assistant&assistant={agent.id}&thread={self.execution.threadId}") + print("") + queue = ctx.queues[agent.name] + waitingForMessages = True + while True: -def processThread(ctx: Context, agent: Agent): - messages = [] - - print(f"[{agent.name}] Id: {agent.id}") - if hasattr(agent, 'talksTo'): - print(f"[{agent.name}] Talks to: {agent.talksTo}") - - thread = ctx.client.beta.threads.create() - print(f"[{agent.name}] Thread {thread.id}") - print(f"https://platform.openai.com/playground?mode=assistant&assistant={agent.id}&thread={thread.id}") - print("") - queue = ctx.queues[agent.name] - waitingForMessages = True - while True: - if agent.name not in ctx.agentsWaitingForActions: - if waitingForMessages: + if self.execution.toolStatus.waiting: + if self.execution.toolStatus.output: + ctx.client.beta.threads.runs.submit_tool_outputs( + thread_id=self.execution.threadId, + run_id=self.execution.runId, + tool_outputs=self.execution.toolStatus.output + ) + self.execution.toolStatus.waiting=False + elif waitingForMessages: message = queue.get(block=True) if message is not None: ctx.lock.acquire() @@ -42,23 +44,23 @@ def processThread(ctx: Context, agent: Agent): # print(f"[{agent['name']}] Recieved: {message}") messages.append(message) ctx.client.beta.threads.messages.create( - thread_id=thread.id, + thread_id=self.execution.threadId, content=message, role='user' ) run = ctx.client.beta.threads.runs.create( - thread_id=thread.id, + thread_id=self.execution.threadId, assistant_id=agent.id ) - + self.execution.runId=run.id else: - run = ctx.client.beta.threads.runs.retrieve(thread_id=thread.id, run_id=run.id) + run = 
ctx.client.beta.threads.runs.retrieve(thread_id=self.execution.threadId, run_id=self.execution.runId) if run.status == 'completed': waitingForMessages = True message_list = ctx.client.beta.threads.messages.list( - thread_id=thread.id + thread_id=self.execution.threadId ) retrievedMessages = [] for datum in message_list.data: @@ -79,19 +81,18 @@ def processThread(ctx: Context, agent: Agent): outputs = [] submitOutput=True for action in run.required_action.submit_tool_outputs.tool_calls: - + self.execution.actionId=action.id + self.execution.arguments=json.loads(action.function.arguments) function_name = action.function.name - arguments = json.loads(action.function.arguments) - execution = Execution(threadId=thread.id, runId=run.id, actionId=action.id, arguments=arguments) if function_name == 'sendMessage': - output = agentTools.sendMessage.execute(ctx, agent, execution) + output = agentTools.sendMessage.execute(ctx, agent, self.execution) elif function_name == 'broadcast': - output = agentTools.broadcast.execute(ctx, agent, execution) + output = agentTools.broadcast.execute(ctx, agent, self.execution) elif function_name == 'assignTask': - output = agentTools.assignTask.execute(ctx, agent, execution) + output = agentTools.assignTask.execute(ctx, agent, self.execution) submitOutput=False elif function_name == 'resolveTask': - output = agentTools.resolveTask.execute(ctx, agent, execution) + output = agentTools.resolveTask.execute(ctx, agent, self.execution) else: print(f"[{agent.name}] ERROR unkown function {function_name}") output = { @@ -102,14 +103,13 @@ def processThread(ctx: Context, agent: Agent): outputs.append(output) if submitOutput: ctx.client.beta.threads.runs.submit_tool_outputs( - thread_id=thread.id, - run_id=run.id, + thread_id=self.execution.threadId, + run_id=self.execution.runId, tool_outputs=outputs ) - if execution.exit: + if self.execution.exit: os._exit(0) if ctx.lock.locked(): ctx.lock.release() - print(f"[{agent.name}] RELEASES LOCK") - - time.sleep(1) \ No newline at end of file + print(f"[{agent.name}] RELEASES LOCK") + time.sleep(1) \ No newline at end of file diff --git a/agents/manual_assistants/agentTools/assignTask.py b/agents/manual_assistants/agentTools/assignTask.py index fef4756..afd03b9 100644 --- a/agents/manual_assistants/agentTools/assignTask.py +++ b/agents/manual_assistants/agentTools/assignTask.py @@ -26,13 +26,5 @@ def execute(ctx: Context, agent: Agent, execution: Execution): print(f"[{agent.name}]>[ASSIGN TASK {execution.actionId}]>[{execution.arguments['assignee']}] {execution.arguments['task']}") - ctx.pendingActions.append({ - "id": execution.actionId, - "agent": agent.name, - "threadId": execution.threadId, - "runId": execution.runId, - "outputs": {}}) - - ctx.agentsWaitingForActions.append(agent.name) - + execution.toolStatus.waiting=True ctx.queues[execution.arguments['assignee']].put(f"Task id: {execution.actionId}\n{execution.arguments['task']}") \ No newline at end of file diff --git a/agents/manual_assistants/context.py b/agents/manual_assistants/context.py index e428550..e181992 100644 --- a/agents/manual_assistants/context.py +++ b/agents/manual_assistants/context.py @@ -9,7 +9,6 @@ def __init__(self, client: openai.Client, agents: [Agent]): self.queues = {} self.agents = agents self.pendingActions = [] - self.agentsWaitingForActions = [] self.channels = [] self.lock = threading.Lock() self.outputs = [] diff --git a/agents/manual_assistants/execution.py b/agents/manual_assistants/execution.py index 1155a3f..acba9e7 100644 --- 
a/agents/manual_assistants/execution.py +++ b/agents/manual_assistants/execution.py @@ -1,20 +1,21 @@ +from toolStatus import ToolStatus + class Execution: threadId: str runId: str actionId: str arguments: {} exit: bool + toolStatus: ToolStatus - def __init__(self, threadId: str, runId: str, actionId: str, arguments: {}): - self.threadId = threadId - self.runId = runId - self.actionId = actionId - self.arguments = arguments + def __init__(self): self.exit = False + self.toolStatus = ToolStatus() def __str__(self): properties_str = ', '.join(f'{key}: {value}' for key, value in self.__dict__.items()) return f'Execution({properties_str})' def __repr__(self): - return self.__str__() \ No newline at end of file + return self.__str__() + diff --git a/agents/manual_assistants/run.py b/agents/manual_assistants/run.py index 0acfc15..6b734ec 100644 --- a/agents/manual_assistants/run.py +++ b/agents/manual_assistants/run.py @@ -10,7 +10,7 @@ from context import Context import network from agent import Agent -import agentProcessor +from agentProcessor import AgentProcessor import OAIWrapper import agentEnvHandler @@ -73,9 +73,10 @@ agentEnvHandler.saveId(agentsIdsFile, agent) network.build(ctx) -threading.Thread(target=agentProcessor.processPendingActions, args=(ctx,)).start() + for agent in agents: - threading.Thread(target=agentProcessor.processThread, args=(ctx, agent,)).start() + processor = AgentProcessor() + threading.Thread(target=processor.processThread, args=(ctx, agent,)).start() for agent in agents: if hasattr(agent, 'initMessage'): diff --git a/agents/manual_assistants/toolStatus.py b/agents/manual_assistants/toolStatus.py new file mode 100644 index 0000000..9053147 --- /dev/null +++ b/agents/manual_assistants/toolStatus.py @@ -0,0 +1,15 @@ +class ToolStatus: + waiting: bool + output: {} + + def __init__(self): + self.waiting = False + self.output = {} + + def __str__(self): + properties_str = ', '.join(f'{key}: {value}' for key, value in self.__dict__.items()) + return f'Execution({properties_str})' + + def __repr__(self): + return self.__str__() + \ No newline at end of file From ad348598d14f7c16998076a896a5ce8c12a01762 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Guillermo=20del=20R=C3=ADo?= <40954603+guillermo-delrio@users.noreply.github.com> Date: Tue, 21 Nov 2023 19:15:13 +0100 Subject: [PATCH 120/141] Fix typo Co-authored-by: samgriek <75908439+samgriek@users.noreply.github.com> --- agents/manual_assistants/README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/agents/manual_assistants/README.md b/agents/manual_assistants/README.md index 6d0d628..6968cf7 100644 --- a/agents/manual_assistants/README.md +++ b/agents/manual_assistants/README.md @@ -4,7 +4,7 @@ This folder includes some initial tests on how function calling may enable HAAS # How to execute ``` pip install -r requirements.txt -# If you alredy had the openai module installed you may need to: +# If you already had the openai module installed you may need to: # pip install openai --upgrade # Create a .env file on this path containing the OpenAI API key you wish to use. 
https://platform.openai.com/api-keys From 10c58e3e801a0f662e30389501fef9e79d92dfb5 Mon Sep 17 00:00:00 2001 From: gd464376_gsk Date: Tue, 21 Nov 2023 19:32:06 +0100 Subject: [PATCH 121/141] Fix module --- agents/manual_assistants/agentProcessor.py | 8 ++++---- agents/manual_assistants/agentTools/__init__.py | 8 ++++---- 2 files changed, 8 insertions(+), 8 deletions(-) diff --git a/agents/manual_assistants/agentProcessor.py b/agents/manual_assistants/agentProcessor.py index 79d2cba..fd83ee0 100644 --- a/agents/manual_assistants/agentProcessor.py +++ b/agents/manual_assistants/agentProcessor.py @@ -85,14 +85,14 @@ def processThread(self, ctx: Context, agent: Agent): self.execution.arguments=json.loads(action.function.arguments) function_name = action.function.name if function_name == 'sendMessage': - output = agentTools.sendMessage.execute(ctx, agent, self.execution) + output = agentTools.sendMessage(ctx, agent, self.execution) elif function_name == 'broadcast': - output = agentTools.broadcast.execute(ctx, agent, self.execution) + output = agentTools.broadcast(ctx, agent, self.execution) elif function_name == 'assignTask': - output = agentTools.assignTask.execute(ctx, agent, self.execution) + output = agentTools.assignTask(ctx, agent, self.execution) submitOutput=False elif function_name == 'resolveTask': - output = agentTools.resolveTask.execute(ctx, agent, self.execution) + output = agentTools.resolveTask(ctx, agent, self.execution) else: print(f"[{agent.name}] ERROR unkown function {function_name}") output = { diff --git a/agents/manual_assistants/agentTools/__init__.py b/agents/manual_assistants/agentTools/__init__.py index 650546b..465e77e 100644 --- a/agents/manual_assistants/agentTools/__init__.py +++ b/agents/manual_assistants/agentTools/__init__.py @@ -1,4 +1,4 @@ -from .sendMessage import execute -from .broadcast import execute -from .assignTask import execute -from .resolveTask import execute \ No newline at end of file +from .sendMessage import execute as sendMessage +from .broadcast import execute as broadcast +from .assignTask import execute as assignTask +from .resolveTask import execute as resolveTask \ No newline at end of file From 17707742895170f75682ba0f65cee045b246f843 Mon Sep 17 00:00:00 2001 From: Chad Phillips Date: Wed, 22 Nov 2023 15:01:04 -0500 Subject: [PATCH 122/141] add logging class, integrate HTTP POST to debug interface --- agents/manual_assistants/agentProcessor.py | 33 ++++++------ .../manual_assistants/agentTools/__init__.py | 8 +-- .../agentTools/assignTask.py | 6 ++- .../manual_assistants/agentTools/broadcast.py | 8 +-- .../agentTools/resolveTask.py | 6 ++- .../agentTools/sendMessage.py | 9 ++-- agents/manual_assistants/logger.py | 52 +++++++++++++++++++ agents/manual_assistants/requirements.txt | 3 +- 8 files changed, 93 insertions(+), 32 deletions(-) create mode 100644 agents/manual_assistants/logger.py diff --git a/agents/manual_assistants/agentProcessor.py b/agents/manual_assistants/agentProcessor.py index fd83ee0..17b1814 100644 --- a/agents/manual_assistants/agentProcessor.py +++ b/agents/manual_assistants/agentProcessor.py @@ -5,6 +5,7 @@ from agent import Agent import os from execution import Execution +from logger import AgentLogger class AgentProcessor: execution: Execution @@ -13,16 +14,16 @@ def __init__(self): self.execution = Execution() def processThread(self, ctx: Context, agent: Agent): + self.log = AgentLogger(agent.name, agent) messages = [] - print(f"[{agent.name}] Id: {agent.id}") + self.log.info(f"Id: {agent.id}") if hasattr(agent, 
'talksTo'): - print(f"[{agent.name}] Talks to: {agent.talksTo}") + self.log.info(f"Talks to: {agent.talksTo}") self.execution.threadId = ctx.client.beta.threads.create().id - print(f"[{agent.name}] Thread {self.execution.threadId}") - print(f"https://platform.openai.com/playground?mode=assistant&assistant={agent.id}&thread={self.execution.threadId}") - print("") + self.log.info(f"Thread {self.execution.threadId}") + self.log.info(f"https://platform.openai.com/playground?mode=assistant&assistant={agent.id}&thread={self.execution.threadId}") queue = ctx.queues[agent.name] waitingForMessages = True while True: @@ -39,9 +40,9 @@ def processThread(self, ctx: Context, agent: Agent): message = queue.get(block=True) if message is not None: ctx.lock.acquire() - print(f"[{agent.name}] ACQUIRES LOCK") + self.log.info("ACQUIRES LOCK") waitingForMessages = False - # print(f"[{agent['name']}] Recieved: {message}") + # self.log.info(f"Recieved: {message}") messages.append(message) ctx.client.beta.threads.messages.create( thread_id=self.execution.threadId, @@ -72,11 +73,11 @@ def processThread(self, ctx: Context, agent: Agent): while i < len(retrievedMessages): retrievedMessage=retrievedMessages[i] messages.append(retrievedMessage) - print(f"[{agent.name}] Message: {retrievedMessage}") + self.log.info(f"Message: {retrievedMessage}") i+=1 if ctx.lock.locked(): ctx.lock.release() - print(f"[{agent.name}] RELEASES LOCK") + self.log.info("RELEASES LOCK") elif run.status == 'requires_action': outputs = [] submitOutput=True @@ -85,16 +86,16 @@ def processThread(self, ctx: Context, agent: Agent): self.execution.arguments=json.loads(action.function.arguments) function_name = action.function.name if function_name == 'sendMessage': - output = agentTools.sendMessage(ctx, agent, self.execution) + output = agentTools.sendMessageFunction(ctx, agent, self.execution) elif function_name == 'broadcast': - output = agentTools.broadcast(ctx, agent, self.execution) + output = agentTools.broadcastFunction(ctx, agent, self.execution) elif function_name == 'assignTask': - output = agentTools.assignTask(ctx, agent, self.execution) + output = agentTools.assignTaskFunction(ctx, agent, self.execution) submitOutput=False elif function_name == 'resolveTask': - output = agentTools.resolveTask(ctx, agent, self.execution) + output = agentTools.resolveTaskFunction(ctx, agent, self.execution) else: - print(f"[{agent.name}] ERROR unkown function {function_name}") + self.log.error(f"Unkown function {function_name}") output = { "tool_call_id": action.id, "output": "Unkown function" @@ -111,5 +112,5 @@ def processThread(self, ctx: Context, agent: Agent): os._exit(0) if ctx.lock.locked(): ctx.lock.release() - print(f"[{agent.name}] RELEASES LOCK") - time.sleep(1) \ No newline at end of file + self.log.info("RELEASES LOCK") + time.sleep(1) diff --git a/agents/manual_assistants/agentTools/__init__.py b/agents/manual_assistants/agentTools/__init__.py index 465e77e..8610e55 100644 --- a/agents/manual_assistants/agentTools/__init__.py +++ b/agents/manual_assistants/agentTools/__init__.py @@ -1,4 +1,4 @@ -from .sendMessage import execute as sendMessage -from .broadcast import execute as broadcast -from .assignTask import execute as assignTask -from .resolveTask import execute as resolveTask \ No newline at end of file +from .sendMessage import execute as sendMessageFunction +from .broadcast import execute as broadcastFunction +from .assignTask import execute as assignTaskFunction +from .resolveTask import execute as resolveTaskFunction diff --git 
a/agents/manual_assistants/agentTools/assignTask.py b/agents/manual_assistants/agentTools/assignTask.py index afd03b9..5dae293 100644 --- a/agents/manual_assistants/agentTools/assignTask.py +++ b/agents/manual_assistants/agentTools/assignTask.py @@ -1,6 +1,7 @@ from context import Context from agent import Agent from execution import Execution +from logger import AgentLogger definition=\ { @@ -25,6 +26,7 @@ } def execute(ctx: Context, agent: Agent, execution: Execution): - print(f"[{agent.name}]>[ASSIGN TASK {execution.actionId}]>[{execution.arguments['assignee']}] {execution.arguments['task']}") + log = AgentLogger(agent.name, agent) + log.info(f"[ASSIGN TASK {execution.actionId}]>[{execution.arguments['assignee']}] {execution.arguments['task']}", extra={'action_id': execution.actionId, 'task': execution.arguments['task'], 'assignee': execution.arguments['assignee']}) execution.toolStatus.waiting=True - ctx.queues[execution.arguments['assignee']].put(f"Task id: {execution.actionId}\n{execution.arguments['task']}") \ No newline at end of file + ctx.queues[execution.arguments['assignee']].put(f"Task id: {execution.actionId}\n{execution.arguments['task']}") diff --git a/agents/manual_assistants/agentTools/broadcast.py b/agents/manual_assistants/agentTools/broadcast.py index caae313..8acf189 100644 --- a/agents/manual_assistants/agentTools/broadcast.py +++ b/agents/manual_assistants/agentTools/broadcast.py @@ -1,6 +1,7 @@ from context import Context from agent import Agent from execution import Execution +from logger import AgentLogger definition=\ { @@ -27,10 +28,11 @@ def execute(ctx: Context, agent: Agent, execution: Execution): + log = AgentLogger(agent.name, agent) if hasattr(agent, 'channels') and (execution.arguments['channel'] in agent.channels): for channel in ctx.channels: if channel['name'] == execution.arguments['channel']: - print(f"[{agent.name}]->({execution.arguments['channel']}) {execution.arguments['message']}") + log.info(f"({execution.arguments['channel']}) {execution.arguments['message']}", extra={'broadcast_channel': execution.arguments['channel']}) for recipient in channel['agents']: if recipient != agent.name: # Do not queue the message on the agent that sent in ctx.queues[recipient].put(execution.arguments['message']) @@ -39,8 +41,8 @@ def execute(ctx: Context, agent: Agent, execution: Execution): "output": "Message sent" } else: - print(f"[{agent.name}] ERROR unkown channel {execution.arguments['channel']}") + log.error(f"Unkown channel {execution.arguments['channel']}", extra={'channel': execution.arguments['channel']}) return { "tool_call_id": execution.actionId, "output": "Unkown channel" - } \ No newline at end of file + } diff --git a/agents/manual_assistants/agentTools/resolveTask.py b/agents/manual_assistants/agentTools/resolveTask.py index e320f66..8f862d2 100644 --- a/agents/manual_assistants/agentTools/resolveTask.py +++ b/agents/manual_assistants/agentTools/resolveTask.py @@ -1,6 +1,7 @@ from context import Context from agent import Agent from execution import Execution +from logger import AgentLogger definition=\ { @@ -26,7 +27,8 @@ def execute(ctx: Context, agent: Agent, execution: Execution): - print(f"[{agent.name}]>[RESOLVE TASK {execution.arguments['id']}] {execution.arguments['result']}") + log = AgentLogger(agent.name, agent) + log.info(f"[RESOLVE TASK {execution.arguments['id']}] {execution.arguments['result']}", extra={'result': execution.arguments['result']}) outputs = [] outputs.append({ "tool_call_id": execution.arguments['id'], @@ -39,4 +41,4 
@@ def execute(ctx: Context, agent: Agent, execution: Execution): return { "tool_call_id": execution.actionId, "output": "Task resolved" - } \ No newline at end of file + } diff --git a/agents/manual_assistants/agentTools/sendMessage.py b/agents/manual_assistants/agentTools/sendMessage.py index de5d9f2..f5488fe 100644 --- a/agents/manual_assistants/agentTools/sendMessage.py +++ b/agents/manual_assistants/agentTools/sendMessage.py @@ -2,6 +2,7 @@ from context import Context from agent import Agent from execution import Execution +from logger import AgentLogger definition=\ { @@ -27,17 +28,17 @@ } def execute(ctx: Context, agent: Agent, execution: Execution): + log = AgentLogger(agent.name, agent) if hasattr(agent, 'talksTo') and (execution.arguments['recipient'] in agent.talksTo): if execution.arguments['recipient'] == "USER": - print(f"[{ctx.agent.name}] Result: {execution.arguments['message']}") + log.info(f"Result: {execution.arguments['message']}", extra={'result': execution.arguments['message']}) os._exit(0) else: - print(f"[{ctx.agent.name}]->[{execution.arguments['recipient']}] {execution.arguments['message']}") - + log.info(f"[{execution.arguments['recipient']}] {execution.arguments['message']}", extra={'recipient': execution.arguments['recipient']}) ctx.queues[execution.arguments['recipient']].put(execution.arguments['message']) return { "tool_call_id": ctx.action.id, "output": "Message sent" } else: - print(f"[{agent.name}] ERROR unkown recipient {execution.arguments['recipient']}") \ No newline at end of file + log.error(f"Unkown recipient {execution.arguments['recipient']}") diff --git a/agents/manual_assistants/logger.py b/agents/manual_assistants/logger.py new file mode 100644 index 0000000..3e060c5 --- /dev/null +++ b/agents/manual_assistants/logger.py @@ -0,0 +1,52 @@ +import logging +import requests +from logging import Handler, LogRecord +from agent import Agent + +DEFAULT_DEBUG_ENDPOINT = 'http://localhost:8000/send-message/' + +DEFAULT_LOG_FORMAT = "%(name)s - %(levelname)s - %(message)s" +DUMMY_LOG_RECORD = LogRecord(name="", level=0, pathname="", lineno=0, msg="", args=(), exc_info=None) +LOG_DEFAULT_ATTRS = set(vars(DUMMY_LOG_RECORD).keys()) + + +class HTTPDebuggerHandler(Handler): + def __init__(self, agent: Agent, url: str = DEFAULT_DEBUG_ENDPOINT): + super().__init__() + self.agent = agent + self.url = url + + def emit(self, record): + data = { + 'name': self.agent.id, + 'label': self.agent.name, + 'model': self.agent.model, + 'log_level': record.levelname, + 'message': record.getMessage(), + } + extra_attrs = {key: value for key, value in record.__dict__.items() if key not in LOG_DEFAULT_ATTRS} + data.update(extra_attrs) + try: + requests.post(self.url, json=data) + except requests.exceptions.RequestException: + # Silently ignore request exceptions, allows operation without an active debugger. + pass + except Exception: + self.handleError(record) + + +class AgentLogger: + def __new__(cls, name, agent: Agent): + logger = logging.getLogger(name) + # Prevent duplicate loggers. 
+ if logger.hasHandlers(): + return logger + logger.setLevel(logging.DEBUG) + log_console_handler = logging.StreamHandler() + log_console_handler.setFormatter(logging.Formatter(DEFAULT_LOG_FORMAT)) + log_console_handler.setLevel(logging.DEBUG) + logger.addHandler(log_console_handler) + http_debugger_handler = HTTPDebuggerHandler(agent) + http_debugger_handler.setLevel(logging.DEBUG) + logger.addHandler(http_debugger_handler) + return logger diff --git a/agents/manual_assistants/requirements.txt b/agents/manual_assistants/requirements.txt index f0dd0ae..8485a3c 100644 --- a/agents/manual_assistants/requirements.txt +++ b/agents/manual_assistants/requirements.txt @@ -1 +1,2 @@ -openai \ No newline at end of file +openai +requests From 9c6de7286485d630175fddf9bedcc36cf97d3849 Mon Sep 17 00:00:00 2001 From: Chad Phillips Date: Fri, 24 Nov 2023 10:51:54 -0500 Subject: [PATCH 123/141] refactor function handling - OAI function definitions are now parsed from function class defintions - A function manager handles loading functions from a list of specified directories - Initial support for Langchain functions has been added (untested) --- agents/manual_assistants/OAIWrapper.py | 15 +- agents/manual_assistants/agentProcessor.py | 36 +-- agents/manual_assistants/agentTools/README.md | 176 +++++++-------- .../manual_assistants/agentTools/__init__.py | 4 - .../agentTools/assignTask.py | 32 --- .../manual_assistants/agentTools/broadcast.py | 48 ---- .../agentTools/functions/assign_task.py | 22 ++ .../agentTools/functions/broadcast.py | 37 ++++ .../agentTools/functions/resolve_task.py | 33 +++ .../agentTools/functions/send_message.py | 37 ++++ .../agentTools/resolveTask.py | 44 ---- .../agentTools/sendMessage.py | 44 ---- .../definitions/boss-worker3/agents.yaml | 18 +- agents/manual_assistants/doc_parser.py | 113 ++++++++++ agents/manual_assistants/function.py | 49 +++++ agents/manual_assistants/function_manager.py | 208 ++++++++++++++++++ agents/manual_assistants/logger.py | 14 ++ agents/manual_assistants/requirements.txt | 2 + agents/manual_assistants/run.py | 9 +- agents/manual_assistants/util.py | 31 +++ 20 files changed, 678 insertions(+), 294 deletions(-) delete mode 100644 agents/manual_assistants/agentTools/__init__.py delete mode 100644 agents/manual_assistants/agentTools/assignTask.py delete mode 100644 agents/manual_assistants/agentTools/broadcast.py create mode 100644 agents/manual_assistants/agentTools/functions/assign_task.py create mode 100644 agents/manual_assistants/agentTools/functions/broadcast.py create mode 100644 agents/manual_assistants/agentTools/functions/resolve_task.py create mode 100644 agents/manual_assistants/agentTools/functions/send_message.py delete mode 100644 agents/manual_assistants/agentTools/resolveTask.py delete mode 100644 agents/manual_assistants/agentTools/sendMessage.py create mode 100644 agents/manual_assistants/doc_parser.py create mode 100644 agents/manual_assistants/function.py create mode 100644 agents/manual_assistants/function_manager.py create mode 100644 agents/manual_assistants/util.py diff --git a/agents/manual_assistants/OAIWrapper.py b/agents/manual_assistants/OAIWrapper.py index 807592c..6cd7d92 100644 --- a/agents/manual_assistants/OAIWrapper.py +++ b/agents/manual_assistants/OAIWrapper.py @@ -1,14 +1,15 @@ from agent import Agent from openai import OpenAI -from agentTools import * +from function_manager import FunctionManager -def createAssistant(client: OpenAI, agent: Agent): - toolList=[] + +def createAssistant(client: OpenAI, agent: Agent, 
function_manager: FunctionManager): + toolList = [] if hasattr(agent, "tools"): for tool in agent.tools: - toolClass=globals().get(tool, None) - toolDict={"type": "function", "function": toolClass.definition} - toolList.append(toolDict) + if function_manager.function_exists(tool): + toolDict = {"type": "function", "function": function_manager.get_function_config(tool)} + toolList.append(toolDict) print(toolList) assistant = client.beta.assistants.create( name=agent.name, @@ -16,4 +17,4 @@ def createAssistant(client: OpenAI, agent: Agent): tools=toolList, model=agent.model ) - agent.id=assistant.id \ No newline at end of file + agent.id = assistant.id diff --git a/agents/manual_assistants/agentProcessor.py b/agents/manual_assistants/agentProcessor.py index 17b1814..2d28bc5 100644 --- a/agents/manual_assistants/agentProcessor.py +++ b/agents/manual_assistants/agentProcessor.py @@ -7,11 +7,13 @@ from execution import Execution from logger import AgentLogger + class AgentProcessor: execution: Execution - def __init__(self): - self.execution = Execution() + def __init__(self, function_manager): + self.execution = Execution() + self.function_manager = function_manager def processThread(self, ctx: Context, agent: Agent): self.log = AgentLogger(agent.name, agent) @@ -80,28 +82,30 @@ def processThread(self, ctx: Context, agent: Agent): self.log.info("RELEASES LOCK") elif run.status == 'requires_action': outputs = [] - submitOutput=True + submitOutput = True for action in run.required_action.submit_tool_outputs.tool_calls: - self.execution.actionId=action.id - self.execution.arguments=json.loads(action.function.arguments) + self.execution.actionId = action.id + self.execution.arguments = json.loads(action.function.arguments) function_name = action.function.name - if function_name == 'sendMessage': - output = agentTools.sendMessageFunction(ctx, agent, self.execution) - elif function_name == 'broadcast': - output = agentTools.broadcastFunction(ctx, agent, self.execution) - elif function_name == 'assignTask': - output = agentTools.assignTaskFunction(ctx, agent, self.execution) - submitOutput=False - elif function_name == 'resolveTask': - output = agentTools.resolveTaskFunction(ctx, agent, self.execution) + if self.function_manager.function_exists(function_name): + if function_name == 'assign_task': + submitOutput = False + success, output, user_message = self.function_manager.run_function(function_name, self.execution.arguments, ctx, agent, self.execution) else: self.log.error(f"Unkown function {function_name}") output = { "tool_call_id": action.id, "output": "Unkown function" - } + } + if not success: + error_message = f"Error running function {function_name}: {user_message}" + self.log.error(error_message) + output = { + "tool_call_id": action.id, + "output": error_message, + } if output: - outputs.append(output) + outputs.append(output) if submitOutput: ctx.client.beta.threads.runs.submit_tool_outputs( thread_id=self.execution.threadId, diff --git a/agents/manual_assistants/agentTools/README.md b/agents/manual_assistants/agentTools/README.md index 1cb6611..163bb59 100644 --- a/agents/manual_assistants/agentTools/README.md +++ b/agents/manual_assistants/agentTools/README.md @@ -1,87 +1,89 @@ -# Send message -{ - "name": "sendMessage", - "description": "Send a message to another agent", - "parameters": { - "type": "object", - "properties": { - "recipient": { - "type": "string", - "description": "Agent name to send the message to" - }, - "message": { - "type": "string", - "description": "Message to 
send" - } - }, - "required": [ - "recipient", "message" - ] - } -} - -# Broadcast -{ - "name": "broadcast", - "description": "Broadcast a message on a channel", - "parameters": { - "type": "object", - "properties": { - "channel": { - "type": "string", - "description": "Channel name to broadcast the message to" - }, - "message": { - "type": "string", - "description": "Message to broadcast" - } - }, - "required": [ - "channel", "message" - ] - } -} - -# Assign task -{ - "name": "assignTask", - "description": "Assign a task to the worker agents", - "parameters": { - "type": "object", - "properties": { - "assignee": { - "type": "string", - "description": "Name of the agent assigned to this task" - }, - "task": { - "type": "string", - "description": "Description of the task" - } - }, - "required": [ - "description" - ] - } -} - -# Resolve task -{ - "name": "resolveTask", - "description": "Send final task results to the boss agent", - "parameters": { - "type": "object", - "properties": { - "id": { - "type": "string", - "description": "Task id provided when the task was assigned" - }, - "result": { - "type": "string", - "description": "Result of the task" - } - }, - "required": [ - "description" - ] - } -} \ No newline at end of file +# Functions + +[OpenAI functions](https://platform.openai.com/docs/guides/gpt/function-calling) for all models that support it. + +Multiple functions may be attached to an agent. + +The example configuration below assumes you want to add a new function called `test_function`. + +Tools are assigned to an agent by including them in the `tools` property. + +```yaml +- name: "Boss" + tools: ["test_function"] +``` + +## Creating functions. + +Functions are created as callable Python classes, that inherit from the base `Function` class. + +The class name must be the camel-cased version of the snake-cased function name, so `test_function` becomes `TestFunction`. + +There is one required method to implement, `__call__`, its return value can techincally be anything, including no return, +but is generally a dict necessary to return a tool call. + +```python +from function import Function + +class TestFunction(Function): + def __call__(self, word: str, repeats: int, enclose_with: str = '') -> dict: + """ + Repeat the provided word a number of times. + + :param word: The word to repeat. + :type content: str + :param repeats: The number of times to repeat the word. + :type repeats: int + :param enclose_with: Optional string to enclose the final content. + :type enclose_with: str, optional + :return: A dictionary containing the repeated content. + :rtype: dict + """ + action_id = self.execution.actionId + try: + repeated_content = " ".join([word] * repeats) + enclosed_content = f"{enclose_with}{repeated_content}{enclose_with}" + output = { + "tool_call_id": action_id, + 'output': enclosed_content, + } + output = { + } + except Exception as e: + output = { + "tool_call_id": action_id, + 'output': f"ERROR: {str(e)}", + } + return output +``` + +The file should be named `[function_name].py`, e.g. `test_function.py`, and be placed in the `agentTools/functions` directory. + +## Providing the function definition + +In the example above, notice both the type hints in the function signature (e.g. `word: str`), +and the reStructured text documentation of the method arguments. +This is the default method for providing the function definition to the OpenAI API. 
+ +Alternatively, you may provide the function definition by creating a `[function_name].config.yaml` file in the same location as the +`[function_name].py` file, e.g. `test_function.config.yaml` -- if provided, its contents will be used instead of the default +method. + +Finally, for full control, you may override the `get_config()` method of the base `Function` class, and return +a dictionary of the function definition. This approach allows passing much more robust function definitions to the LLM. + +## Support for Langchain tools + +[Langchain](https://docs.langchain.com) has many useful [tools](https://python.langchain.com/docs/modules/agents/tools/) +that can be used in function calls. + +To use a Langchain tool as function: + +1. Find the name of the tool class, e.g. `MoveFileTool` or `ShellTool`. +2. Prefix that class name with `Langchain-` +3. Add it to the `functions` list for the agent: + +```yaml +- name: "Boss" + tools: ["Langchain-ShellTool"] +``` diff --git a/agents/manual_assistants/agentTools/__init__.py b/agents/manual_assistants/agentTools/__init__.py deleted file mode 100644 index 8610e55..0000000 --- a/agents/manual_assistants/agentTools/__init__.py +++ /dev/null @@ -1,4 +0,0 @@ -from .sendMessage import execute as sendMessageFunction -from .broadcast import execute as broadcastFunction -from .assignTask import execute as assignTaskFunction -from .resolveTask import execute as resolveTaskFunction diff --git a/agents/manual_assistants/agentTools/assignTask.py b/agents/manual_assistants/agentTools/assignTask.py deleted file mode 100644 index 5dae293..0000000 --- a/agents/manual_assistants/agentTools/assignTask.py +++ /dev/null @@ -1,32 +0,0 @@ -from context import Context -from agent import Agent -from execution import Execution -from logger import AgentLogger - -definition=\ - { - "name": "assignTask", - "description": "Assign a task to the worker agents", - "parameters": { - "type": "object", - "properties": { - "assignee": { - "type": "string", - "description": "Name of the agent assigned to this task" - }, - "task": { - "type": "string", - "description": "Description of the task" - } - }, - "required": [ - "description" - ] - } - } - -def execute(ctx: Context, agent: Agent, execution: Execution): - log = AgentLogger(agent.name, agent) - log.info(f"[ASSIGN TASK {execution.actionId}]>[{execution.arguments['assignee']}] {execution.arguments['task']}", extra={'action_id': execution.actionId, 'task': execution.arguments['task'], 'assignee': execution.arguments['assignee']}) - execution.toolStatus.waiting=True - ctx.queues[execution.arguments['assignee']].put(f"Task id: {execution.actionId}\n{execution.arguments['task']}") diff --git a/agents/manual_assistants/agentTools/broadcast.py b/agents/manual_assistants/agentTools/broadcast.py deleted file mode 100644 index 8acf189..0000000 --- a/agents/manual_assistants/agentTools/broadcast.py +++ /dev/null @@ -1,48 +0,0 @@ -from context import Context -from agent import Agent -from execution import Execution -from logger import AgentLogger - -definition=\ - { - "name": "broadcast", - "description": "Broadcast a message on a channel", - "parameters": { - "type": "object", - "properties": { - "channel": { - "type": "string", - "description": "Channel name to broadcast the message to" - }, - "message": { - "type": "string", - "description": "Message to broadcast" - } - }, - "required": [ - "channel", - "message" - ] - } - } - - -def execute(ctx: Context, agent: Agent, execution: Execution): - log = AgentLogger(agent.name, agent) - if 
hasattr(agent, 'channels') and (execution.arguments['channel'] in agent.channels): - for channel in ctx.channels: - if channel['name'] == execution.arguments['channel']: - log.info(f"({execution.arguments['channel']}) {execution.arguments['message']}", extra={'broadcast_channel': execution.arguments['channel']}) - for recipient in channel['agents']: - if recipient != agent.name: # Do not queue the message on the agent that sent in - ctx.queues[recipient].put(execution.arguments['message']) - return { - "tool_call_id": execution.actionId, - "output": "Message sent" - } - else: - log.error(f"Unkown channel {execution.arguments['channel']}", extra={'channel': execution.arguments['channel']}) - return { - "tool_call_id": execution.actionId, - "output": "Unkown channel" - } diff --git a/agents/manual_assistants/agentTools/functions/assign_task.py b/agents/manual_assistants/agentTools/functions/assign_task.py new file mode 100644 index 0000000..fac6d63 --- /dev/null +++ b/agents/manual_assistants/agentTools/functions/assign_task.py @@ -0,0 +1,22 @@ +from function import Function +from logger import AgentLogger + + +class AssignTask(Function): + # Ignore for pytest. + __test__ = False + + def __call__(self, assignee: str, task: str) -> None: + """ + Assign a task to the worker agents. + + :param assignee: Name of the agent assigned to this task. + :type assignee: str + :param task: Description of the task. + :type task: str + """ + log = AgentLogger(self.agent.name, self.agent) + action_id = self.execution.actionId + log.info(f"[ASSIGN TASK {action_id}]>[{assignee}] {task}", extra={'action_id': action_id, 'task': task, 'assignee': assignee}) + self.execution.toolStatus.waiting = True + self.context.queues[assignee].put(f"Task id: {action_id}\n{task}") diff --git a/agents/manual_assistants/agentTools/functions/broadcast.py b/agents/manual_assistants/agentTools/functions/broadcast.py new file mode 100644 index 0000000..07d4d80 --- /dev/null +++ b/agents/manual_assistants/agentTools/functions/broadcast.py @@ -0,0 +1,37 @@ +from function import Function +from logger import AgentLogger + + +class Broadcast(Function): + # Ignore for pytest. + __test__ = False + + def __call__(self, channel_name: str, message: str) -> dict: + """ + Broadcast a message on a channel. + + :param channel_name: Channel name to broadcast the message to. + :type channel_name: str + :param message: Message to broadcast. 
+ :type message: str + """ + log = AgentLogger(self.agent.name, self.agent) + action_id = self.execution.actionId + if hasattr(self.agent, 'channels') and (channel_name in self.agent.channels): + for channel in self.context.channels: + if channel['name'] == channel_name: + log.info(f"({channel_name}) {message}", extra={'broadcast_channel': channel_name}) + for recipient in channel['agents']: + if recipient != self.agent.name: # Do not queue the message on the agent that sent in + self.context.queues[recipient].put(message) + return { + "tool_call_id": action_id, + "output": f"Message sent to {channel_name}" + } + else: + message = f"Unkown channel {channel_name}" + log.error(message, extra={'channel': channel_name}) + return { + "tool_call_id": action_id, + "output": message + } diff --git a/agents/manual_assistants/agentTools/functions/resolve_task.py b/agents/manual_assistants/agentTools/functions/resolve_task.py new file mode 100644 index 0000000..fb74bab --- /dev/null +++ b/agents/manual_assistants/agentTools/functions/resolve_task.py @@ -0,0 +1,33 @@ +from function import Function +from logger import AgentLogger + + +class ResolveTask(Function): + # Ignore for pytest. + __test__ = False + + def __call__(self, id: str, result: str) -> dict: + """ + Send final task results to the boss agent. + + :param id: Task id provided when the task was assigned. + :type id: str + :param result: Result of the task. + :type result: str + """ + log = AgentLogger(self.agent.name, self.agent) + action_id = self.execution.actionId + log.info(f"[RESOLVE TASK {id}] {result}", extra={'result': result}) + outputs = [] + outputs.append({ + "tool_call_id": id, + "output": result, + }) + for pendingAction in self.context.pendingActions: + if pendingAction['id'] == id: + pendingAction['outout'] = outputs + self.execution.exit = True + return { + "tool_call_id": action_id, + "output": f"Task {id} resolved" + } diff --git a/agents/manual_assistants/agentTools/functions/send_message.py b/agents/manual_assistants/agentTools/functions/send_message.py new file mode 100644 index 0000000..c3f9480 --- /dev/null +++ b/agents/manual_assistants/agentTools/functions/send_message.py @@ -0,0 +1,37 @@ +import os +from function import Function +from logger import AgentLogger + + +class SendMessage(Function): + # Ignore for pytest. + __test__ = False + + def __call__(self, recipient: str, message: str) -> dict: + """ + Send a message to another agent. + + :param recipient: Agent name to send the message to. + :type recipient: str + :param message: Message to send. 
+ :type message: str + """ + log = AgentLogger(self.agent.name, self.agent) + if hasattr(self.agent, 'talksTo') and (recipient in self.agent.talksTo): + if recipient == "USER": + log.info(f"Result: {message}", extra={'result': message}) + os._exit(0) + else: + log.info(f"[{recipient}] {message}", extra={'recipient': recipient}) + self.context.queues[recipient].put(message) + return { + "tool_call_id": self.context.action.id, + "output": f"Message sent to {recipient}" + } + else: + message = f"Unkown recipient {recipient}" + log.error(message) + return { + "tool_call_id": self.context.action.id, + "output": message + } diff --git a/agents/manual_assistants/agentTools/resolveTask.py b/agents/manual_assistants/agentTools/resolveTask.py deleted file mode 100644 index 8f862d2..0000000 --- a/agents/manual_assistants/agentTools/resolveTask.py +++ /dev/null @@ -1,44 +0,0 @@ -from context import Context -from agent import Agent -from execution import Execution -from logger import AgentLogger - -definition=\ - { - "name": "resolveTask", - "description": "Send final task results to the boss agent", - "parameters": { - "type": "object", - "properties": { - "id": { - "type": "string", - "description": "Task id provided when the task was assigned" - }, - "result": { - "type": "string", - "description": "Result of the task" - } - }, - "required": [ - "description" - ] - } - } - - -def execute(ctx: Context, agent: Agent, execution: Execution): - log = AgentLogger(agent.name, agent) - log.info(f"[RESOLVE TASK {execution.arguments['id']}] {execution.arguments['result']}", extra={'result': execution.arguments['result']}) - outputs = [] - outputs.append({ - "tool_call_id": execution.arguments['id'], - "output": execution.arguments['result'] - }) - for pendingAction in ctx.pendingActions: - if pendingAction['id'] == execution.arguments['id']: - pendingAction['outout']=outputs - execution.exit = True - return { - "tool_call_id": execution.actionId, - "output": "Task resolved" - } diff --git a/agents/manual_assistants/agentTools/sendMessage.py b/agents/manual_assistants/agentTools/sendMessage.py deleted file mode 100644 index f5488fe..0000000 --- a/agents/manual_assistants/agentTools/sendMessage.py +++ /dev/null @@ -1,44 +0,0 @@ -import os -from context import Context -from agent import Agent -from execution import Execution -from logger import AgentLogger - -definition=\ - { - "name": "sendMessage", - "description": "Send a message to another agent", - "parameters": { - "type": "object", - "properties": { - "recipient": { - "type": "string", - "description": "Agent name to send the message to" - }, - "message": { - "type": "string", - "description": "Message to send" - } - }, - "required": [ - "recipient", - "message" - ] - } - } - -def execute(ctx: Context, agent: Agent, execution: Execution): - log = AgentLogger(agent.name, agent) - if hasattr(agent, 'talksTo') and (execution.arguments['recipient'] in agent.talksTo): - if execution.arguments['recipient'] == "USER": - log.info(f"Result: {execution.arguments['message']}", extra={'result': execution.arguments['message']}) - os._exit(0) - else: - log.info(f"[{execution.arguments['recipient']}] {execution.arguments['message']}", extra={'recipient': execution.arguments['recipient']}) - ctx.queues[execution.arguments['recipient']].put(execution.arguments['message']) - return { - "tool_call_id": ctx.action.id, - "output": "Message sent" - } - else: - log.error(f"Unkown recipient {execution.arguments['recipient']}") diff --git 
a/agents/manual_assistants/definitions/boss-worker3/agents.yaml b/agents/manual_assistants/definitions/boss-worker3/agents.yaml index 1cbd97b..0800794 100644 --- a/agents/manual_assistants/definitions/boss-worker3/agents.yaml +++ b/agents/manual_assistants/definitions/boss-worker3/agents.yaml @@ -1,5 +1,5 @@ - name: "Boss" - tools: ["assignTask"] + tools: ["assign_task"] talksTo: ["USER", "Worker 1", "Worker 2", "Worker 3"] initMessage: "Explain how clouds are formed in 100 words or less" instructions: > @@ -12,10 +12,10 @@ INSTRUCTIONS - Complete the task in your mission. - - To talk to other agents call the function 'sendMessage'. At the beginning of th message identify yourself. + - To talk to other agents call the function 'send_message'. At the beginning of the message identify yourself. - Agents: ["USER", "Worker 1", "Worker 2", "Worker 3"] - name: "Worker 1" - tools: ["resolveTask", "broadcast"] + tools: ["resolve_task", "broadcast"] talksTo: ["Boss", "Worker 2", "Worker 3"] channels: ["Worker"] instructions: > @@ -28,10 +28,10 @@ - If you receive a message from the boss let the other workers know and start working together on the mission. Make sure to pass the task id provided by the boss. - If you receive a message from other workers don't reply back unless necessary. Keep the worker channel as free from noise as possible. Share results in the channel to advance the mission, but do not send acknowledgements. - Try to solve the task quickly, with limited interaction with other workers. - - To send the task results back to the boss call the function 'resolveTask'. Pass the id recieved from the boss when the task was assigned. + - To send the task results back to the boss call the function 'resolve_task'. Pass the id recieved from the boss when the task was assigned. - Channels: [{'name': 'Worker', 'agents': ['Worker 1', 'Worker 2', 'Worker 3']}] - name: "Worker 2" - tools: ["resolveTask", "broadcast"] + tools: ["resolve_task", "broadcast"] talksTo: ["Boss", "Worker 1", "Worker 3"] channels: ["Worker"] instructions: > @@ -44,10 +44,10 @@ - If you receive a message from the boss let the other workers know and start working together on the mission. Make sure to pass the task id provided by the boss. - If you receive a message from other workers don't reply back unless necessary. Keep the worker channel as free from noise as possible. Share results in the channel to advance the mission, but do not send acknowledgements. - Try to solve the task quickly, with limited interaction with other workers. - - To send the task results back to the boss call the function 'resolveTask'. Pass the id recieved from the boss when the task was assigned. + - To send the task results back to the boss call the function 'resolve_task'. Pass the id recieved from the boss when the task was assigned. - Channels: [{'name': 'Worker', 'agents': ['Worker 1', 'Worker 2', 'Worker 3']}] - name: "Worker 3" - tools: ["resolveTask", "broadcast"] + tools: ["resolve_task", "broadcast"] talksTo: ["Boss", "Worker 1", "Worker 2"] channels: ["Worker"] instructions: > @@ -60,5 +60,5 @@ - If you receive a message from the boss let the other workers know and start working together on the mission. Make sure to pass the task id provided by the boss. - If you receive a message from other workers don't reply back unless necessary. Keep the worker channel as free from noise as possible. Share results in the channel to advance the mission, but do not send acknowledgements. 
- Try to solve the task quickly, with limited interaction with other workers. - - To send the task results back to the boss call the function 'resolveTask'. Pass the id recieved from the boss when the task was assigned. - - Channels: [{'name': 'Worker', 'agents': ['Worker 1', 'Worker 2', 'Worker 3']}] \ No newline at end of file + - To send the task results back to the boss call the function 'resolve_task'. Pass the id recieved from the boss when the task was assigned. + - Channels: [{'name': 'Worker', 'agents': ['Worker 1', 'Worker 2', 'Worker 3']}] diff --git a/agents/manual_assistants/doc_parser.py b/agents/manual_assistants/doc_parser.py new file mode 100644 index 0000000..314b661 --- /dev/null +++ b/agents/manual_assistants/doc_parser.py @@ -0,0 +1,113 @@ +from typing import Dict, Any + +import inspect + +import docutils.parsers.rst +import docutils.utils +import docutils.frontend +import docutils.nodes + + +def type_mapping(dtype): + if dtype == float: + return "number" + elif dtype == int: + return "integer" + elif dtype == str: + return "string" + else: + return "string" + + +def merge_argument_attrs_from_doc(attrs, param_name, parsed_doc): + doc_attrs = parsed_doc.get(param_name) + description = "" + if doc_attrs: + description = doc_attrs.get("description", "") + attrs["description"] = description + return attrs + + +def func_to_openai_function_spec(name, func): + argspec = inspect.getfullargspec(func) + func_doc = inspect.getdoc(func) + parsed_doc = parse_docstring(func_doc) + func_description = parsed_doc.get("__description", "") + params = argspec.annotations + if "return" in params.keys(): + del params["return"] + for param_name in argspec.args: + if param_name == "self": + continue + params[param_name] = {"type": type_mapping(argspec.annotations[param_name])} + params[param_name] = merge_argument_attrs_from_doc( + params[param_name], param_name, parsed_doc + ) + len_optional_params = len(argspec.defaults) if argspec.defaults else None + return { + "name": name, + "description": func_description, + "parameters": {"type": "object", "properties": params}, + "required": argspec.args[1:-len_optional_params] + if len_optional_params + else argspec.args[1:], + } + + +def parse_rst(text: str) -> docutils.nodes.document: + parser = docutils.parsers.rst.Parser() + settings = docutils.frontend.get_default_settings(docutils.parsers.rst.Parser) + document = docutils.utils.new_document("", settings=settings) + parser.parse(text, document) + return document + + +def parse_type(type_str: str) -> Dict[str, Any]: + type_info = {"optional": False} + type_parts = type_str.split(",") + if "optional" in type_parts: + type_info["optional"] = True + type_parts.remove("optional") + type_info["type"] = eval(type_parts[0].strip()) + return type_info + + +def parse_docstring(docstring: str) -> Dict[str, Dict[str, Any]]: + document = parse_rst(docstring) + parsed_elements = {} + description = [] + description_complete = False + for elem in document.findall(): + if isinstance(elem, docutils.nodes.paragraph): + if not description_complete and ( + not elem.parent or not isinstance(elem.parent, docutils.nodes.field_list) + ): + description.append(elem.astext()) + elif isinstance(elem, docutils.nodes.field_name): + description_complete = True + field_name = elem.astext() + field_body = elem.parent.children[1].astext() + if field_name.startswith(("param", "type", "raises", "return", "rtype")): + try: + prefix, arg_name = field_name.split(" ", 1) + except ValueError: + prefix = field_name.strip() + arg_name = 
None + if arg_name and arg_name not in parsed_elements: + parsed_elements[arg_name] = {} + if prefix == "param": + parsed_elements[arg_name]["description"] = field_body + elif prefix == "type": + parsed_elements[arg_name].update(parse_type(field_body)) + elif prefix == "raises": + exception_type = arg_name + if prefix not in parsed_elements: + parsed_elements[prefix] = {} + parsed_elements[prefix]["description"] = field_body + parsed_elements[prefix]["type"] = eval(exception_type) + elif prefix == "return": + parsed_elements["return"] = {"description": field_body} + elif prefix == "rtype": + parsed_elements["return"].update(parse_type(field_body)) + parsed_elements["__description"] = " ".join(description) + return parsed_elements diff --git a/agents/manual_assistants/function.py b/agents/manual_assistants/function.py new file mode 100644 index 0000000..4447b52 --- /dev/null +++ b/agents/manual_assistants/function.py @@ -0,0 +1,49 @@ +from abc import abstractmethod + +import yaml + +from pathlib import Path + +from logger import Logger +from doc_parser import func_to_openai_function_spec + + +class Function: + def __init__(self): + self.log = Logger(self.__class__.__name__) + + def set_name(self, name): + self.name = name + + def set_filepath(self, filepath): + self.filepath = filepath + + def set_agent(self, agent): + self.agent = agent + + def set_context(self, context): + self.context = context + + def set_execution(self, execution): + self.execution = execution + + def get_config(self): + filepath = Path(self.filepath) + config_filepath = filepath.with_suffix(".config.yaml") + if config_filepath.is_file(): + try: + self.log.debug( + f"Loading configuration for {self.name} from filepath: {config_filepath}" + ) + with open(config_filepath, "r") as config_file: + config = yaml.safe_load(config_file) + self.log.debug(f"Loaded YAML configuration for {self.name}: {config}") + return config + except Exception as e: + self.log.error(f"Error loading configuration for {self.name}: {str(e)}") + raise ValueError(f"Failed to load configuration file for {self.name}") from e + return func_to_openai_function_spec(self.name, self.__call__) + + @abstractmethod + def __call__(self, **kwargs): + pass diff --git a/agents/manual_assistants/function_manager.py b/agents/manual_assistants/function_manager.py new file mode 100644 index 0000000..f24e411 --- /dev/null +++ b/agents/manual_assistants/function_manager.py @@ -0,0 +1,208 @@ +import os +import json +import importlib + +from pathlib import Path + +from logger import Logger +import util + +try: + import langchain.tools + HAS_LANGCHAIN = True +except ImportError: + HAS_LANGCHAIN = False + +LANGCHAIN_TOOL_PREFIX = "Langchain-" + + +class FunctionManager: + """ + Manage functions. 
+ """ + + def __init__(self, additional_functions=None): + self.additional_functions = additional_functions or {} + self.log = Logger(self.__class__.__name__) + self.user_function_dirs = ( + util.get_environment_variable_list("function_dir") or [] + ) + self.make_user_function_dirs() + self.system_function_dirs = [ + os.path.join(util.get_file_directory(), "agentTools", "functions"), + ] + self.all_function_dirs = self.system_function_dirs + self.user_function_dirs + + def make_user_function_dirs(self): + for function_dir in self.user_function_dirs: + os.makedirs(function_dir, exist_ok=True) + + def load_function(self, function_name): + self.log.debug("Loading function from dirs: %s" % ", ".join(self.all_function_dirs)) + function_filepath = None + try: + for function_dir in self.all_function_dirs: + if os.path.exists(function_dir) and os.path.isdir(function_dir): + self.log.info(f"Processing directory: {function_dir}") + filename = f"{function_name}.py" + if filename in os.listdir(function_dir): + self.log.debug( + f"Loading function file {filename} from directory: {function_dir}" + ) + try: + filepath = os.path.join(function_dir, filename) + with open(filepath, "r") as _: + function_filepath = filepath + except Exception as e: + self.log.warning( + f"Can't open function file {function_name} from directory: {function_dir}: {e}" + ) + else: + message = f"Failed to load function {function_name}: Directory {function_dir!r} not found or not a directory" + self.log.error(message) + return False, None, message + except Exception as e: + message = f"An error occurred while loading function {function_name}: {e}" + self.log.error(message) + return False, None, message + if function_filepath is not None: + message = ( + f"Successfully loaded function file {function_name} from directory: {function_dir}" + ) + self.log.info(message) + return True, function_filepath, message + return False, None, f"Function {function_name} not found" + + def function_exists(self, function_name): + return function_name in self.functions + + def is_langchain_tool(self, function_name): + self.log.debug(f"Checking for Langchain tool: {function_name}") + return function_name.lower().startswith(LANGCHAIN_TOOL_PREFIX.lower()) + + def get_langchain_tool(self, function_name): + self.log.debug(f"Loading Langchain tool: {function_name}") + tool_name = util.remove_prefix(function_name, LANGCHAIN_TOOL_PREFIX) + try: + tool = getattr(langchain.tools, tool_name) + tool_instance = tool() + return tool_instance + except Exception as e: + self.log.warning(f"Could not load Langchaine tool: {function_name}: {str(e)}") + return None + + def get_langchain_tool_spec(self, function_name): + self.log.debug(f"Loading tool spec for Langchain tool: {function_name}") + tool_instance = self.get_langchain_tool(function_name) + if not tool_instance: + raise RuntimeError(f"Langchain tool {function_name} not found") + spec = langchain.tools.format_tool_to_openai_function(tool_instance) + spec["name"] = function_name + return spec + + def run_langchain_tool(self, function_name, input_data): + self.log.debug(f"Running langchaing tool: {function_name} with data: {input_data}") + tool_instance = self.get_langchain_tool(function_name) + if not tool_instance: + raise RuntimeError(f"Langchain tool {function_name} not found") + try: + result = tool_instance.run(input_data) + except Exception as e: + message = ( + f"Error: Exception occurred while running langchain tool {function_name}: {str(e)}" + ) + self.log.error(message) + return False, None, message + 
message = f"Langchain tool {function_name} executed successfully, output data: {result}" + self.log.info(message) + return True, result, message + + def load_functions(self): + self.log.debug("Loading functions from dirs: %s" % ", ".join(self.all_function_dirs)) + self.functions = self.additional_functions + try: + for function_dir in self.all_function_dirs: + if os.path.exists(function_dir) and os.path.isdir(function_dir): + self.log.info(f"Processing directory: {function_dir}") + for filename in os.listdir(function_dir): + filepath = os.path.join(function_dir, filename) + if filepath.endswith(".py"): + function_name = Path(filename).stem + self.log.debug( + f"Loading function file {filename} from directory: {function_dir}" + ) + self.functions[function_name] = filepath + else: + message = ( + f"Failed to load directory {function_dir!r}: not found or not a directory" + ) + self.log.error(message) + return False, None, message + return True, self.functions, "Successfully loaded functions" + except Exception as e: + message = f"An error occurred while loading functions: {e}" + self.log.error(message) + return False, None, message + + def setup_function_instance(self, function_name, function_path, context=None, agent=None, execution=None): + self.log.info(f"Loading function {function_name} from {function_path}") + try: + spec = importlib.util.spec_from_file_location(function_name, function_path) + module = importlib.util.module_from_spec(spec) + spec.loader.exec_module(module) + function_class_name = util.snake_to_class(function_name) + function_class = getattr(module, function_class_name) + function_instance = function_class() + function_instance.set_name(function_name) + function_instance.set_filepath(function_path) + function_instance.set_context(context) + function_instance.set_agent(agent) + function_instance.set_execution(execution) + return function_instance + except Exception as e: + self.log.error(f"Error creating function instance for {function_name}: {e}") + raise RuntimeError(f"Error creating function instance for {function_name}") from e + + def get_function_config(self, function_name): + self.log.debug(f"Getting config for function: {function_name}") + if self.is_langchain_tool(function_name): + return self.get_langchain_tool_spec(function_name) + try: + _success, function_path, user_message = self.load_function(function_name) + function_instance = self.setup_function_instance(function_name, function_path) + config = function_instance.get_config() + return config + except Exception as e: + self.log.error(f"Error loading function configuration for {function_name}: {str(e)}") + raise RuntimeError(f"Failed to load configuration for {function_name}") from e + + def run_function(self, function_name, input_data, context, agent, execution): + if isinstance(input_data, str): + input_data = json.loads(input_data, strict=False) + if self.is_langchain_tool(function_name): + if HAS_LANGCHAIN: + return self.run_langchain_tool(function_name, input_data) + raise RuntimeError( + f"Langchain tool {function_name} not found, please install langchain" + ) + self.log.debug(f"Running function: {function_name} with data: {input_data}") + success, function_path, user_message = self.load_function(function_name) + if not success: + return False, function_name, user_message + function_instance = self.setup_function_instance(function_name, function_path, context, agent, execution) + try: + output_data = function_instance(**input_data) + self.log.info( + f"Function {function_name} executed successfully, 
output data: {output_data}" + ) + return True, output_data, f"Function {function_name!r} executed successfully" + except Exception as e: + message = f"Error: Exception occurred while executing {function_path}: {str(e)}" + self.log.error(message) + return False, None, message + + def is_system_function(self, filepath): + for dir in self.system_function_dirs: + if filepath.startswith(dir): + return True + return False diff --git a/agents/manual_assistants/logger.py b/agents/manual_assistants/logger.py index 3e060c5..56e007b 100644 --- a/agents/manual_assistants/logger.py +++ b/agents/manual_assistants/logger.py @@ -50,3 +50,17 @@ def __new__(cls, name, agent: Agent): http_debugger_handler.setLevel(logging.DEBUG) logger.addHandler(http_debugger_handler) return logger + + +class Logger: + def __new__(cls, name): + logger = logging.getLogger(name) + # Prevent duplicate loggers. + if logger.hasHandlers(): + return logger + logger.setLevel(logging.DEBUG) + log_console_handler = logging.StreamHandler() + log_console_handler.setFormatter(logging.Formatter(DEFAULT_LOG_FORMAT)) + log_console_handler.setLevel(logging.DEBUG) + logger.addHandler(log_console_handler) + return logger diff --git a/agents/manual_assistants/requirements.txt b/agents/manual_assistants/requirements.txt index 8485a3c..0c9d0a5 100644 --- a/agents/manual_assistants/requirements.txt +++ b/agents/manual_assistants/requirements.txt @@ -1,2 +1,4 @@ +docutils>=0.20.1 openai +PyYAML requests diff --git a/agents/manual_assistants/run.py b/agents/manual_assistants/run.py index 6b734ec..de9455c 100644 --- a/agents/manual_assistants/run.py +++ b/agents/manual_assistants/run.py @@ -11,6 +11,7 @@ import network from agent import Agent from agentProcessor import AgentProcessor +from function_manager import FunctionManager import OAIWrapper import agentEnvHandler @@ -65,19 +66,21 @@ print(f"Agents: {agents}") +function_manager = FunctionManager() +function_manager.load_functions() # Create new assistants for agent in agents: if not hasattr(agent, 'id'): # It's a new agent - OAIWrapper.createAssistant(client, agent) + OAIWrapper.createAssistant(client, agent, function_manager) agentEnvHandler.saveId(agentsIdsFile, agent) network.build(ctx) for agent in agents: - processor = AgentProcessor() + processor = AgentProcessor(function_manager) threading.Thread(target=processor.processThread, args=(ctx, agent,)).start() for agent in agents: if hasattr(agent, 'initMessage'): - ctx.queues[agent.name].put(agent.initMessage) \ No newline at end of file + ctx.queues[agent.name].put(agent.initMessage) diff --git a/agents/manual_assistants/util.py b/agents/manual_assistants/util.py new file mode 100644 index 0000000..38daa32 --- /dev/null +++ b/agents/manual_assistants/util.py @@ -0,0 +1,31 @@ +import os +import re +import inspect + + +def get_file_directory(): + filepath = inspect.stack()[1].filename + return os.path.dirname(os.path.abspath(filepath)) + + +def snake_to_class(string): + parts = string.split("_") + return "".join(word.title() for word in parts) + + +def get_environment_variable(name, default=None): + return os.environ.get(f"HAAS_{name.upper()}", default) + + +def get_environment_variable_list(name): + var_list = get_environment_variable(name) + return split_on_delimiter(var_list, ":") if var_list else None + + +def split_on_delimiter(string, delimiter=","): + return [x.strip() for x in string.split(delimiter)] + + +def remove_prefix(text, prefix): + pattern = r"(?i)^" + re.escape(prefix) + return re.sub(pattern, "", text) From 
d5a3216d0ed8b58b22b4e10773a49c5f82cdd67b Mon Sep 17 00:00:00 2001 From: Chad Phillips Date: Sun, 26 Nov 2023 15:08:39 -0500 Subject: [PATCH 124/141] properly check for existence of langchain tools --- agents/manual_assistants/function_manager.py | 2 ++ 1 file changed, 2 insertions(+) diff --git a/agents/manual_assistants/function_manager.py b/agents/manual_assistants/function_manager.py index f24e411..25a6cd6 100644 --- a/agents/manual_assistants/function_manager.py +++ b/agents/manual_assistants/function_manager.py @@ -74,6 +74,8 @@ def load_function(self, function_name): return False, None, f"Function {function_name} not found" def function_exists(self, function_name): + if self.is_langchain_tool(function_name): + return bool(self.get_langchain_tool(function_name)) return function_name in self.functions def is_langchain_tool(self, function_name): From 7ffc9d2822637b17c2d982d71a03632d46799109 Mon Sep 17 00:00:00 2001 From: Chad Phillips Date: Sun, 26 Nov 2023 15:09:09 -0500 Subject: [PATCH 125/141] convert OAIWrapper to class, add agent update support --- agents/manual_assistants/OAIWrapper.py | 55 +++++++++++++++++++------- agents/manual_assistants/run.py | 9 +++-- 2 files changed, 46 insertions(+), 18 deletions(-) diff --git a/agents/manual_assistants/OAIWrapper.py b/agents/manual_assistants/OAIWrapper.py index 6cd7d92..17d3ba0 100644 --- a/agents/manual_assistants/OAIWrapper.py +++ b/agents/manual_assistants/OAIWrapper.py @@ -1,20 +1,45 @@ from agent import Agent from openai import OpenAI +from logger import AgentLogger from function_manager import FunctionManager -def createAssistant(client: OpenAI, agent: Agent, function_manager: FunctionManager): - toolList = [] - if hasattr(agent, "tools"): - for tool in agent.tools: - if function_manager.function_exists(tool): - toolDict = {"type": "function", "function": function_manager.get_function_config(tool)} - toolList.append(toolDict) - print(toolList) - assistant = client.beta.assistants.create( - name=agent.name, - instructions=agent.instructions, - tools=toolList, - model=agent.model - ) - agent.id = assistant.id +class OAIWrapper: + + def __init__(self, client: OpenAI, agent: Agent, function_manager: FunctionManager): + self.client = client + self.agent = agent + self.function_manager = function_manager + self.log = AgentLogger(self.agent.name, self.agent) + + def createAssistant(self): + self.log.info(f"Creating assistant: {self.agent.name}") + toolList = self.getAgentTools() + assistant = self.client.beta.assistants.create( + name=self.agent.name, + instructions=self.agent.instructions, + tools=toolList, + model=self.agent.model + ) + self.agent.id = assistant.id + + def updateAssistant(self): + self.log.debug(f"Updating existing assistant: {self.agent.name}") + toolList = self.getAgentTools() + self.client.beta.assistants.update( + assistant_id=self.agent.id, + name=self.agent.name, + instructions=self.agent.instructions, + tools=toolList, + model=self.agent.model + ) + + def getAgentTools(self): + toolList = [] + if hasattr(self.agent, "tools"): + for tool in self.agent.tools: + if self.function_manager.function_exists(tool): + toolDict = {"type": "function", "function": self.function_manager.get_function_config(tool)} + toolList.append(toolDict) + self.log.debug(f"Tool list: {toolList}") + return toolList diff --git a/agents/manual_assistants/run.py b/agents/manual_assistants/run.py index de9455c..1adaf38 100644 --- a/agents/manual_assistants/run.py +++ b/agents/manual_assistants/run.py @@ -12,7 +12,7 @@ from agent import Agent 
from agentProcessor import AgentProcessor from function_manager import FunctionManager -import OAIWrapper +from OAIWrapper import OAIWrapper import agentEnvHandler dotenv.load_dotenv() @@ -71,8 +71,11 @@ # Create new assistants for agent in agents: - if not hasattr(agent, 'id'): # It's a new agent - OAIWrapper.createAssistant(client, agent, function_manager) + oai_wrapper = OAIWrapper(client, agent, function_manager) + if hasattr(agent, 'id'): # It's an existing agent + oai_wrapper.updateAssistant() + else: # It's a new agent + oai_wrapper.createAssistant() agentEnvHandler.saveId(agentsIdsFile, agent) network.build(ctx) From 055d6ae9e9a3aa75e90020f4d7f563dae16e2d63 Mon Sep 17 00:00:00 2001 From: Chad Phillips Date: Sun, 26 Nov 2023 15:09:48 -0500 Subject: [PATCH 126/141] refactor how tools return output, improve logging for tool calls --- agents/manual_assistants/agentProcessor.py | 29 ++++++++++++------- .../agentTools/functions/broadcast.py | 16 ++++------ .../agentTools/functions/resolve_task.py | 7 ++--- .../agentTools/functions/send_message.py | 10 ++----- 4 files changed, 27 insertions(+), 35 deletions(-) diff --git a/agents/manual_assistants/agentProcessor.py b/agents/manual_assistants/agentProcessor.py index 2d28bc5..02e4ed5 100644 --- a/agents/manual_assistants/agentProcessor.py +++ b/agents/manual_assistants/agentProcessor.py @@ -87,25 +87,32 @@ def processThread(self, ctx: Context, agent: Agent): self.execution.actionId = action.id self.execution.arguments = json.loads(action.function.arguments) function_name = action.function.name + self.log.debug(f"Received tool request, ID: {action.id}, tool: {function_name}, arguments: {self.execution.arguments}", extra={'action_id': action.id, 'tool': function_name, 'arguments': self.execution.arguments}) + output = None if self.function_manager.function_exists(function_name): if function_name == 'assign_task': submitOutput = False - success, output, user_message = self.function_manager.run_function(function_name, self.execution.arguments, ctx, agent, self.execution) + success, tool_output, user_message = self.function_manager.run_function(function_name, self.execution.arguments, ctx, agent, self.execution) + if success: + self.log.debug(f"Tool run {action.id} executed successfully, tool: {function_name}, output: {tool_output}", extra={'action_id': action.id, 'tool': function_name}) + output = { + "tool_call_id": action.id, + "output": tool_output + } + else: + error_message = f"Tool run {action.id}, error running function {function_name}: {user_message}" + self.log.error(error_message, extra={'action_id': action.id, 'tool': function_name}) + output = { + "tool_call_id": action.id, + "output": error_message, + } else: - self.log.error(f"Unkown function {function_name}") + self.log.error(f"Tool run {action.id}, unknown tool {function_name}", extra={'action_id': action.id, 'tool': function_name}) output = { "tool_call_id": action.id, "output": "Unkown function" } - if not success: - error_message = f"Error running function {function_name}: {user_message}" - self.log.error(error_message) - output = { - "tool_call_id": action.id, - "output": error_message, - } - if output: - outputs.append(output) + outputs.append(output) if submitOutput: ctx.client.beta.threads.runs.submit_tool_outputs( thread_id=self.execution.threadId, diff --git a/agents/manual_assistants/agentTools/functions/broadcast.py b/agents/manual_assistants/agentTools/functions/broadcast.py index 07d4d80..a556993 100644 --- 
a/agents/manual_assistants/agentTools/functions/broadcast.py +++ b/agents/manual_assistants/agentTools/functions/broadcast.py @@ -20,18 +20,12 @@ def __call__(self, channel_name: str, message: str) -> dict: if hasattr(self.agent, 'channels') and (channel_name in self.agent.channels): for channel in self.context.channels: if channel['name'] == channel_name: - log.info(f"({channel_name}) {message}", extra={'broadcast_channel': channel_name}) + log.info(f"Action ID {action_id} ({channel_name}) {message}", extra={'broadcast_channel': channel_name, 'action_id': action_id}) for recipient in channel['agents']: if recipient != self.agent.name: # Do not queue the message on the agent that sent in self.context.queues[recipient].put(message) - return { - "tool_call_id": action_id, - "output": f"Message sent to {channel_name}" - } + return f"Message sent to {channel_name}" else: - message = f"Unkown channel {channel_name}" - log.error(message, extra={'channel': channel_name}) - return { - "tool_call_id": action_id, - "output": message - } + message = f"Action ID {action_id} unkown channel {channel_name}" + log.error(message, extra={'channel': channel_name, 'action_id': action_id}) + return message diff --git a/agents/manual_assistants/agentTools/functions/resolve_task.py b/agents/manual_assistants/agentTools/functions/resolve_task.py index fb74bab..169acc2 100644 --- a/agents/manual_assistants/agentTools/functions/resolve_task.py +++ b/agents/manual_assistants/agentTools/functions/resolve_task.py @@ -17,7 +17,7 @@ def __call__(self, id: str, result: str) -> dict: """ log = AgentLogger(self.agent.name, self.agent) action_id = self.execution.actionId - log.info(f"[RESOLVE TASK {id}] {result}", extra={'result': result}) + log.info(f"Action ID: {action_id} [RESOLVE TASK {id}] {result}", extra={'result': result, 'action_id': action_id}) outputs = [] outputs.append({ "tool_call_id": id, @@ -27,7 +27,4 @@ def __call__(self, id: str, result: str) -> dict: if pendingAction['id'] == id: pendingAction['outout'] = outputs self.execution.exit = True - return { - "tool_call_id": action_id, - "output": f"Task {id} resolved" - } + return f"Task {id} resolved" diff --git a/agents/manual_assistants/agentTools/functions/send_message.py b/agents/manual_assistants/agentTools/functions/send_message.py index c3f9480..bff86d1 100644 --- a/agents/manual_assistants/agentTools/functions/send_message.py +++ b/agents/manual_assistants/agentTools/functions/send_message.py @@ -24,14 +24,8 @@ def __call__(self, recipient: str, message: str) -> dict: else: log.info(f"[{recipient}] {message}", extra={'recipient': recipient}) self.context.queues[recipient].put(message) - return { - "tool_call_id": self.context.action.id, - "output": f"Message sent to {recipient}" - } + return f"Message sent to {recipient}" else: message = f"Unkown recipient {recipient}" log.error(message) - return { - "tool_call_id": self.context.action.id, - "output": message - } + return message From e685ca4c01a0eb15cb0bb1389e84593d709e1faf Mon Sep 17 00:00:00 2001 From: Chad Phillips Date: Sun, 26 Nov 2023 15:46:57 -0500 Subject: [PATCH 127/141] pass toolList as extra arg to log --- agents/manual_assistants/OAIWrapper.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/agents/manual_assistants/OAIWrapper.py b/agents/manual_assistants/OAIWrapper.py index 17d3ba0..971c0d8 100644 --- a/agents/manual_assistants/OAIWrapper.py +++ b/agents/manual_assistants/OAIWrapper.py @@ -41,5 +41,5 @@ def getAgentTools(self): if 
self.function_manager.function_exists(tool): toolDict = {"type": "function", "function": self.function_manager.get_function_config(tool)} toolList.append(toolDict) - self.log.debug(f"Tool list: {toolList}") + self.log.debug(f"Tool list: {toolList}", extra={"toolList": toolList}) return toolList From 4b0a1e60f13f79f316172021e9e01512575a83bd Mon Sep 17 00:00:00 2001 From: Chad Phillips Date: Sun, 26 Nov 2023 15:47:13 -0500 Subject: [PATCH 128/141] fix tool documentation --- agents/manual_assistants/agentTools/README.md | 16 ++++------------ 1 file changed, 4 insertions(+), 12 deletions(-) diff --git a/agents/manual_assistants/agentTools/README.md b/agents/manual_assistants/agentTools/README.md index 163bb59..11d761c 100644 --- a/agents/manual_assistants/agentTools/README.md +++ b/agents/manual_assistants/agentTools/README.md @@ -20,7 +20,7 @@ Functions are created as callable Python classes, that inherit from the base `Fu The class name must be the camel-cased version of the snake-cased function name, so `test_function` becomes `TestFunction`. There is one required method to implement, `__call__`, its return value can techincally be anything, including no return, -but is generally a dict necessary to return a tool call. +but is generally the output to return to a tool call. ```python from function import Function @@ -43,17 +43,9 @@ class TestFunction(Function): try: repeated_content = " ".join([word] * repeats) enclosed_content = f"{enclose_with}{repeated_content}{enclose_with}" - output = { - "tool_call_id": action_id, - 'output': enclosed_content, - } - output = { - } + output = enclosed_content except Exception as e: - output = { - "tool_call_id": action_id, - 'output': f"ERROR: {str(e)}", - } + output = f"ERROR: {str(e)}" return output ``` @@ -81,7 +73,7 @@ To use a Langchain tool as function: 1. Find the name of the tool class, e.g. `MoveFileTool` or `ShellTool`. 2. Prefix that class name with `Langchain-` -3. Add it to the `functions` list for the agent: +3. Add it to the `tools` list for the agent: ```yaml - name: "Boss" From b839c9353a336c6c66b7c2021d6a5a2499362160 Mon Sep 17 00:00:00 2001 From: Chad Phillips Date: Sun, 26 Nov 2023 15:57:19 -0500 Subject: [PATCH 129/141] update link to langchain tool integrations --- agents/manual_assistants/agentTools/README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/agents/manual_assistants/agentTools/README.md b/agents/manual_assistants/agentTools/README.md index 11d761c..a64525b 100644 --- a/agents/manual_assistants/agentTools/README.md +++ b/agents/manual_assistants/agentTools/README.md @@ -66,7 +66,7 @@ a dictionary of the function definition. This approach allows passing much more ## Support for Langchain tools -[Langchain](https://docs.langchain.com) has many useful [tools](https://python.langchain.com/docs/modules/agents/tools/) +[Langchain](https://docs.langchain.com) has many useful [tools](https://python.langchain.com/docs/integrations/tools/) that can be used in function calls. 
To use a Langchain tool as function: From 365f19ef37acdfea2cb474fb712419e750cd7aa6 Mon Sep 17 00:00:00 2001 From: Chad Phillips Date: Tue, 28 Nov 2023 16:35:02 -0500 Subject: [PATCH 130/141] tweak log message --- agents/manual_assistants/agentTools/functions/resolve_task.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/agents/manual_assistants/agentTools/functions/resolve_task.py b/agents/manual_assistants/agentTools/functions/resolve_task.py index 169acc2..44af4d2 100644 --- a/agents/manual_assistants/agentTools/functions/resolve_task.py +++ b/agents/manual_assistants/agentTools/functions/resolve_task.py @@ -17,7 +17,7 @@ def __call__(self, id: str, result: str) -> dict: """ log = AgentLogger(self.agent.name, self.agent) action_id = self.execution.actionId - log.info(f"Action ID: {action_id} [RESOLVE TASK {id}] {result}", extra={'result': result, 'action_id': action_id}) + log.info(f"[RESOLVE TASK {id}] {result}", extra={'result': result, 'action_id': action_id, 'task_id': id}) outputs = [] outputs.append({ "tool_call_id": id, From dad607eceb932b043710bc0abf597c70e5a1df17 Mon Sep 17 00:00:00 2001 From: Chad Phillips Date: Tue, 28 Nov 2023 16:35:55 -0500 Subject: [PATCH 131/141] refactor agent create/update, set tools in update --- agents/manual_assistants/OAIWrapper.py | 29 +++++++++++++++----------- agents/manual_assistants/run.py | 8 +++---- 2 files changed, 21 insertions(+), 16 deletions(-) diff --git a/agents/manual_assistants/OAIWrapper.py b/agents/manual_assistants/OAIWrapper.py index 971c0d8..377b689 100644 --- a/agents/manual_assistants/OAIWrapper.py +++ b/agents/manual_assistants/OAIWrapper.py @@ -1,5 +1,7 @@ +import sys + from agent import Agent -from openai import OpenAI +from openai import OpenAI, NotFoundError from logger import AgentLogger from function_manager import FunctionManager @@ -13,26 +15,29 @@ def __init__(self, client: OpenAI, agent: Agent, function_manager: FunctionManag self.log = AgentLogger(self.agent.name, self.agent) def createAssistant(self): - self.log.info(f"Creating assistant: {self.agent.name}") - toolList = self.getAgentTools() assistant = self.client.beta.assistants.create( name=self.agent.name, instructions=self.agent.instructions, - tools=toolList, model=self.agent.model ) self.agent.id = assistant.id + self.log.info(f"Created assistant: {self.agent.name}") def updateAssistant(self): - self.log.debug(f"Updating existing assistant: {self.agent.name}") toolList = self.getAgentTools() - self.client.beta.assistants.update( - assistant_id=self.agent.id, - name=self.agent.name, - instructions=self.agent.instructions, - tools=toolList, - model=self.agent.model - ) + try: + self.client.beta.assistants.update( + assistant_id=self.agent.id, + name=self.agent.name, + instructions=self.agent.instructions, + tools=toolList, + model=self.agent.model + ) + self.log.debug(f"Updated existing assistant: {self.agent.name}") + except NotFoundError as e: + self.log.error(f"Assistant {self.agent.name} not found: {e}") + self.log.error("Remove the cached assistants .env file in the definition directory and try again.") + sys.exit(1) def getAgentTools(self): toolList = [] diff --git a/agents/manual_assistants/run.py b/agents/manual_assistants/run.py index 1adaf38..29e1794 100644 --- a/agents/manual_assistants/run.py +++ b/agents/manual_assistants/run.py @@ -69,14 +69,14 @@ function_manager = FunctionManager() function_manager.load_functions() -# Create new assistants +# Create/update assistants. 
for agent in agents: oai_wrapper = OAIWrapper(client, agent, function_manager) - if hasattr(agent, 'id'): # It's an existing agent - oai_wrapper.updateAssistant() - else: # It's a new agent + if not hasattr(agent, 'id'): # It's a new agent oai_wrapper.createAssistant() agentEnvHandler.saveId(agentsIdsFile, agent) + # Tools are sent to the assistant on update, so always update. + oai_wrapper.updateAssistant() network.build(ctx) From b154a523528e7371c172bba0b7e555020a37ec9f Mon Sep 17 00:00:00 2001 From: Chad Phillips Date: Tue, 28 Nov 2023 17:10:01 -0500 Subject: [PATCH 132/141] Pool rules agent definition An attempt to get three different workers with three different personas to collaborate on a task --- .../definitions/pool-rules/README.md | 3 + .../definitions/pool-rules/agents.yaml | 73 +++++++++++++++++++ 2 files changed, 76 insertions(+) create mode 100644 agents/manual_assistants/definitions/pool-rules/README.md create mode 100644 agents/manual_assistants/definitions/pool-rules/agents.yaml diff --git a/agents/manual_assistants/definitions/pool-rules/README.md b/agents/manual_assistants/definitions/pool-rules/README.md new file mode 100644 index 0000000..62c7237 --- /dev/null +++ b/agents/manual_assistants/definitions/pool-rules/README.md @@ -0,0 +1,3 @@ +# Pool rules + +An attempt to get three different workers with three different personas to collaborate on a task. diff --git a/agents/manual_assistants/definitions/pool-rules/agents.yaml b/agents/manual_assistants/definitions/pool-rules/agents.yaml new file mode 100644 index 0000000..3b031c3 --- /dev/null +++ b/agents/manual_assistants/definitions/pool-rules/agents.yaml @@ -0,0 +1,73 @@ +- name: "Boss" + tools: ["assign_task"] + talksTo: ["USER", "Bob", "Linda", "Nick"] + initMessage: "Design a list of rules for a local swimming pool " + instructions: > + # MISSION + - You are a boss agent in charge of three worker agents. + - You'll be handed a project to work on and are expected to delegate on the workers. + - Send tasks to the workers one a time. They will collaborate on the tasks you provide and get back to you. + - Wait for a worker response before sending another task. + - Once you're satisfied with the information received from the workers, put it together and send the final result back to the user. + + # INSTRUCTIONS + - Complete the task in your mission. + - To talk to other agents call the function 'send_message'. At the beginning of the message identify yourself. + - Agents: ["USER", "Bob", "Linda", "Nick"] +- name: "Bob" + tools: ["resolve_task", "broadcast"] + talksTo: ["Boss", "Linda", "Nick"] + channels: ["Worker"] + instructions: > + # MISSION + You are "Bob", one of three identical worker agents under a boss agent. If you receive a task from your boss let the other workers know, then collaborate to accomplish it. Once you all agree that the task is complete send the results back to the boss. + + # PERSONA + Your belief system includes a strong 'safety first' orientation. You prefer that everyone is safe, even if it means restricting people's freedoms. + + # INSTRUCTIONS + - Complete the task in your mission. + - To talk to other worker agents call the function 'broadcast'. At the beginning of the message identify yourself. + - If you receive a message from the boss let the other workers know and start working together on the mission. Make sure to pass the task id provided by the boss. + - If you receive a message from other workers don't reply back unless necessary. 
Keep the worker channel as free from noise as possible. Share results in the channel to advance the mission, but do not send acknowledgements. + - Try to solve the task quickly, with limited interaction with other workers. + - To send the task results back to the boss call the function 'resolve_task'. Pass the id recieved from the boss when the task was assigned. + - Channels: [{'name': 'Worker', 'agents': ['Bob', 'Linda', 'Nick']}] +- name: "Linda" + tools: ["resolve_task", "broadcast"] + talksTo: ["Boss", "Bob", "Nick"] + channels: ["Worker"] + instructions: > + # MISSION + You are "Linda", one of three identical worker agents under a boss agent. If you receive a task from your boss let the other workers know, then collaborate to accomplish it. Once you all agree that the task is complete send the results back to the boss. + + # PERSONA + Your belief system includes strong preference for personal freedom. You think people should be able to take responsibility for their actions and act freely, even if that means sometimes things aren't perfectly safe. + + # INSTRUCTIONS + - Complete the task in your mission. + - To talk to other worker agents call the function 'broadcast'. At the beginning of the message identify yourself. + - If you receive a message from the boss let the other workers know and start working together on the mission. Make sure to pass the task id provided by the boss. + - If you receive a message from other workers don't reply back unless necessary. Keep the worker channel as free from noise as possible. Share results in the channel to advance the mission, but do not send acknowledgements. + - Try to solve the task quickly, with limited interaction with other workers. + - To send the task results back to the boss call the function 'resolve_task'. Pass the id recieved from the boss when the task was assigned. + - Channels: [{'name': 'Worker', 'agents': ['Bob', 'Linda', 'Nick']}] +- name: "Nick" + tools: ["resolve_task", "broadcast"] + talksTo: ["Boss", "Bob", "Linda"] + channels: ["Worker"] + instructions: > + # MISSION + You are "Nick", one of three identical worker agents under a boss agent. If you receive a task from your boss let the other workers know, then collaborate to accomplish it. Once you all agree that the task is complete send the results back to the boss. + + # PERSONA + Your belief system includes a balanced view of safety and freedom. You realize there are rational tradeoffs to be made between the two, and there is no solution that will maximize for both at the same time. + + # INSTRUCTIONS + - Complete the task in your mission. + - To talk to other worker agents call the function 'broadcast'. At the beginning of the message identify yourself. + - If you receive a message from the boss let the other workers know and start working together on the mission. Make sure to pass the task id provided by the boss. + - If you receive a message from other workers don't reply back unless necessary. Keep the worker channel as free from noise as possible. Share results in the channel to advance the mission, but do not send acknowledgements. + - Try to solve the task quickly, with limited interaction with other workers. + - To send the task results back to the boss call the function 'resolve_task'. Pass the id recieved from the boss when the task was assigned. 
+ - Channels: [{'name': 'Worker', 'agents': ['Bob', 'Linda', 'Nick']}] From ff12381f5ace599c09998131d00dd814e00ffab0 Mon Sep 17 00:00:00 2001 From: Oliver Wiggins-Hay Date: Wed, 29 Nov 2023 23:40:01 +0000 Subject: [PATCH 133/141] Path fixes and readme update title --- .gitignore | 3 +- agents/agent_builder/create.py | 82 ++++++++++++++------------ agents/tool_maker/README.md | 6 +- agents/tool_maker/assistant_manager.py | 5 +- agents/tool_maker/chat_manager.py | 14 +++-- agents/tool_maker/unit_manager.py | 30 +++++++++- 6 files changed, 91 insertions(+), 49 deletions(-) diff --git a/.gitignore b/.gitignore index 4c7bc59..43efb12 100644 --- a/.gitignore +++ b/.gitignore @@ -1,2 +1,3 @@ key_openai.txt -/**/*/.env \ No newline at end of file +/**/*/.env +*.pyc \ No newline at end of file diff --git a/agents/agent_builder/create.py b/agents/agent_builder/create.py index 14f350c..9b8db24 100644 --- a/agents/agent_builder/create.py +++ b/agents/agent_builder/create.py @@ -1,62 +1,69 @@ import os import json +from pathlib import Path from shared.openai_config import get_openai_client -agents_path = 'agents' +agents_path = "agents" client = get_openai_client() # Check if the 'agents' folder is empty or doesn't exist -if not os.path.exists(agents_path) or not os.path.isdir(agents_path) or not os.listdir(agents_path): +if ( + not os.path.exists(agents_path) + or not os.path.isdir(agents_path) + or not os.listdir(agents_path) +): raise ValueError('The "agents" folder is missing, not a directory, or empty.') existing_assistants = {} -for assistant in client.beta.assistants.list(limit=100): +for assistant in client.beta.assistants.list(limit=100): existing_assistants[assistant.name] = assistant # Iterate over each folder inside the 'agents' folder for agent_name in os.listdir(agents_path): - agent_folder = os.path.join(agents_path, agent_name) - + current_file_path = Path(__file__).absolute().parent + agent_folder = os.path.join(current_file_path, agents_path, agent_name) + print(agent_folder) existing_files = {} requested_files = [] existing_agent = {} if agent_name in existing_assistants: - existing_agent = existing_assistants[agent_name] + existing_agent = existing_assistants[agent_name] for file_id in existing_agent.file_ids: existing_file = client.files.retrieve(file_id=file_id) existing_files[existing_file.filename] = existing_file - if os.path.isdir(agent_folder): # Read contents from the 'instructions.md' file - instructions = '' - instructions_file_path = os.path.join(agent_folder, 'instructions.md') + instructions = "" + instructions_file_path = os.path.join(agent_folder, "instructions.md") if os.path.isfile(instructions_file_path): - with open(instructions_file_path, 'r') as f: + with open(instructions_file_path, "r") as f: instructions = f.read() # Read contents from the 'settings.json' file settings = {} - settings_file_path = os.path.join(agent_folder, 'settings.json') + settings_file_path = os.path.join(agent_folder, "settings.json") if os.path.isfile(settings_file_path): - with open(settings_file_path, 'r') as f: + with open(settings_file_path, "r") as f: settings = json.load(f) # Check for the 'files' subfolder and process its contents files = [] - files_folder = os.path.join(agent_folder, 'files') + files_folder = os.path.join(agent_folder, "files") if os.path.isdir(files_folder): for filename in os.listdir(files_folder): requested_files.append(filename) # Doesn't handle if file has been modified if filename not in existing_files: file_path = os.path.join(files_folder, filename) - 
with open(file_path, 'rb') as file_data: + with open(file_path, "rb") as file_data: # Upload each file to OpenAI - file_object = client.files.create(file=file_data, purpose='assistants') - files.append({"name": filename, "id": file_object.id}) + file_object = client.files.create( + file=file_data, purpose="assistants" + ) + files.append({"name": filename, "id": file_object.id}) print(agent_name) print("") @@ -65,16 +72,16 @@ print("") print(f"Files: {list(map(lambda x: x['name'], files))}") - assistant={} + assistant = {} if existing_agent: print(f"{agent_name} already exists... validating properties") update_model = existing_agent.model != settings["model"] update_instructions = existing_agent.instructions != instructions - #need to evaluate tools - + # need to evaluate tools + update_params = {} - + requested_files_set = set(requested_files) existing_files_set = set(existing_files.keys()) @@ -83,36 +90,35 @@ if update_instructions: update_params["instructions"] = instructions if files or requested_files_set != existing_files_set: - retained_set = existing_files_set.intersection(requested_files_set) - all_file_ids = [] - for key in retained_set: - all_file_ids.append(existing_files[key].id) - all_file_ids += list(map(lambda x: x['id'], files)) - update_params['file_ids'] = all_file_ids - if not any( tool.type == "retrieval" for tool in existing_agent.tools): - update_params['tools'] = existing_agent.tools - update_params['tools'].append({'type': 'retrieval'}) + retained_set = existing_files_set.intersection(requested_files_set) + all_file_ids = [] + for key in retained_set: + all_file_ids.append(existing_files[key].id) + all_file_ids += list(map(lambda x: x["id"], files)) + update_params["file_ids"] = all_file_ids + if not any(tool.type == "retrieval" for tool in existing_agent.tools): + update_params["tools"] = existing_agent.tools + update_params["tools"].append({"type": "retrieval"}) if len(update_params) != 0: - print(f"Updating {agent_name}'s { ','.join(update_params.keys()) }") - update_params['assistant_id'] = existing_agent.id + print(f"Updating {agent_name}'s { ','.join(update_params.keys()) }") + update_params["assistant_id"] = existing_agent.id assistant = client.beta.assistants.update(**update_params) else: - print(f"{agent_name} is up to date") - else: - + print(f"{agent_name} is up to date") + else: create_params = { "name": agent_name, "instructions": instructions, "model": settings["model"], - "tools": settings["tools"] + "tools": settings["tools"], } # Only include 'file_ids' if there are files if files: - create_params['tools'].append({'type': 'retrieval'}) - create_params['file_ids'] = list(map(lambda x: x['id'], files)) + create_params["tools"].append({"type": "retrieval"}) + create_params["file_ids"] = list(map(lambda x: x["id"], files)) # Create the assistant using the uploaded file IDs if files exist assistant = client.beta.assistants.create(**create_params) - print("***********************************************") \ No newline at end of file + print("***********************************************") diff --git a/agents/tool_maker/README.md b/agents/tool_maker/README.md index c38e575..b470d18 100644 --- a/agents/tool_maker/README.md +++ b/agents/tool_maker/README.md @@ -5,7 +5,11 @@ This preliminary experiment has to do with getting the OpenAI Assistant endpoint This function is as-yet undefined. # Version 1 -run the ```unit_manager.py``` file. 
+Ensure you have an OPENAI_API_KEY set following this guide: https://help.openai.com/en/articles/5112595-best-practices-for-api-key-safety +Make sure you are in the root directory. +run the ```unit_manager.py``` file using the following command. +` -m agents.tool_maker.unit_manager`. +Where you have inserted your correct python path, this will run the file as if it was part of a module with root location. You will be prompted to define a tool for creation. The assistant will then generate an OpenAI tool compatible JSON schema defining the name of the new function, it's description and the input argument schema. It will proceed to add this tool to the current assistant. diff --git a/agents/tool_maker/assistant_manager.py b/agents/tool_maker/assistant_manager.py index a7d7544..17ec77e 100644 --- a/agents/tool_maker/assistant_manager.py +++ b/agents/tool_maker/assistant_manager.py @@ -1,4 +1,4 @@ -from tool_manager import ToolManager +from agents.tool_maker.tool_manager import ToolManager from pathlib import Path import os import json @@ -102,8 +102,9 @@ def make_coding_assistant(self): if __name__ == "__main__": from shared.openai_config import get_openai_client + client = get_openai_client() - + assistant_manager = AssistantManager(client=client) assistant = assistant_manager.get_assistant() print(assistant) diff --git a/agents/tool_maker/chat_manager.py b/agents/tool_maker/chat_manager.py index 2faac31..fcd4c41 100644 --- a/agents/tool_maker/chat_manager.py +++ b/agents/tool_maker/chat_manager.py @@ -1,11 +1,9 @@ import importlib from pathlib import Path -from tool_manager import ToolManager +from agents.tool_maker.tool_manager import ToolManager import json import os - from openai import OpenAI -from tool_manager import ToolManager Assistant = type(OpenAI().beta.assistants.list().data[0]) Thread = type(OpenAI().beta.threads.create()) @@ -14,11 +12,11 @@ class ChatManager: def __init__(self, client: OpenAI): self.client = client - Path(__file__).absolute().parent functions_path = os.path.join( Path(__file__).absolute().parent, "python_functions" ) self.functions_path = functions_path + print(self.functions_path) def create_thread_from_user_input(self): return self.client.beta.threads.create( @@ -30,12 +28,17 @@ def create_empty_thread(self): def run_python_from_function_name(self, call): print("CALLING FUNCTION") + base = ".".join(__name__.split(".")[:-1]) try: function_name = call.function.name + fn = getattr( - importlib.import_module(f"python_functions.{function_name}"), + importlib.reload( + importlib.import_module(f"{base}.python_functions.{function_name}") + ), function_name, ) + print(fn) result = fn(**json.loads(call.function.arguments)) response = {"tool_call_id": call.id, "output": f"result:{result}"} except Exception as error: @@ -43,6 +46,7 @@ def run_python_from_function_name(self, call): "tool_call_id": call.id, "output": f"{{{type(error)}:{error.args}}}", } + print(response) return response def handle_fucntion_request( diff --git a/agents/tool_maker/unit_manager.py b/agents/tool_maker/unit_manager.py index 231f118..8dd5717 100644 --- a/agents/tool_maker/unit_manager.py +++ b/agents/tool_maker/unit_manager.py @@ -1,9 +1,31 @@ -from assistant_manager import AssistantManager -from chat_manager import ChatManager +from agents.tool_maker.assistant_manager import AssistantManager +from agents.tool_maker.chat_manager import ChatManager class Unit: + """ + A class which creates and exposes chat functionality for a Unit Agent. 
+ A Unit is a first prototype for a Minimum Viable Agent (MVA). + + A `Unit` is two `Assistant`s in a symbiotic relationship. + One `Assistant` is the Interface with a thread sharing input with the contents passed via the `chat` method, + the other `Assistant` is a functional one which shares a thread with `submit_tool` requests during runs and is responsible for writing python functions. + + :param AssistantManager assistant_manager: Creates and retrieves different `Assistant` types + :param ChatManager chat_manager: provides functionality for managing `Threads` + :param Assistant interface_assistant: talks with `chat` method + :param Assistant functional_assistant: writes python functions when `OpenAI.beta.threads.runs.submit_tools` is called in `chat` + :param Thread interface_thread: `Thread` between `interface_assistant` and `chat` + :param Thread functional_thread: `Thread` between `functional_assistant` and `OpenAI.beta.threads.runs.submit_tools` + :returns: this is returned + """ + def __init__(self, client): + """ + Instantiates a Unit object + + :param Client client: OpenAI instance + """ self.assistant_manager = AssistantManager(client=client) self.chat_manager = ChatManager(client=client) self.interface_assistant = self.assistant_manager.get_assistant() @@ -13,6 +35,9 @@ def __init__(self, client): self.functional_thread = self.chat_manager.create_empty_thread() def chat(self): + """ + Accepts user input and performs a thread run with the `interface_assistant` + """ while True: ( self.interface_assistant, @@ -28,6 +53,7 @@ def chat(self): if __name__ == "__main__": from shared.openai_config import get_openai_client + client = get_openai_client() unit = Unit(client=client) From e862a1f04e520864feda1619c812c82fd8324e79 Mon Sep 17 00:00:00 2001 From: FireMMDC Date: Wed, 29 Nov 2023 22:06:50 -0500 Subject: [PATCH 134/141] Reworking create.py into a class, adding the ability to create a specific assistant, updating tool_maker/assistant_manager to use the new AgentBuilder class.
Also tweaked the tool_creator_metadata and its usage within tool_maker/assistant_manger --- agents/agent_builder/create.py | 73 +++++++++++--------- agents/tool_maker/assistant_manager.py | 15 ++-- agents/tool_maker/tool_creator_metadata.json | 7 +- 3 files changed, 56 insertions(+), 39 deletions(-) diff --git a/agents/agent_builder/create.py b/agents/agent_builder/create.py index 595be56..396d924 100644 --- a/agents/agent_builder/create.py +++ b/agents/agent_builder/create.py @@ -3,39 +3,31 @@ from pathlib import Path from shared.openai_config import get_openai_client - -def create_assistants(): - agents_path = "agents" - client = get_openai_client() - - # Check if the 'agents' folder is empty or doesn't exist - if ( - not os.path.exists(agents_path) - or not os.path.isdir(agents_path) - or not os.listdir(agents_path) - ): - raise ValueError('The "agents" folder is missing, not a directory, or empty.') - - - existing_assistants = {} - - - for assistant in client.beta.assistants.list(limit=100): - existing_assistants[assistant.name] = assistant - - - # Iterate over each folder inside the 'agents' folder - for agent_name in os.listdir(agents_path): +class AgentBuilder: + + def __init__(self,client): + self.client = client + self.existing_assistants = {} + self.agents_path = "agents" + + def get_existing_assistants(self): + if not self.existing_assistants: + for assistant in self.client.beta.assistants.list(limit=100): + self.existing_assistants[assistant.name] = assistant + + def create_assistant(self, agent_name): current_file_path = Path(__file__).absolute().parent - agent_folder = os.path.join(current_file_path, agents_path, agent_name) + agent_folder = os.path.join(current_file_path, self.agents_path, agent_name) + #Could not create agent_name, they need to be defined in /agents/agent_builder/ print(agent_folder) existing_files = {} requested_files = [] existing_agent = {} - if agent_name in existing_assistants: - existing_agent = existing_assistants[agent_name] + self.get_existing_assistants() + if agent_name in self.existing_assistants: + existing_agent = self.existing_assistants[agent_name] for file_id in existing_agent.file_ids: - existing_file = client.files.retrieve(file_id=file_id) + existing_file = self.client.files.retrieve(file_id=file_id) existing_files[existing_file.filename] = existing_file @@ -65,7 +57,7 @@ def create_assistants(): file_path = os.path.join(files_folder, filename) with open(file_path, 'rb') as file_data: # Upload each file to OpenAI - file_object = client.files.create( + file_object = self.client.files.create( file=file_data, purpose='assistants' ) files.append({"name": filename, "id": file_object.id}) @@ -118,7 +110,7 @@ def create_assistants(): if len(update_params) != 0: print(f"Updating {agent_name}'s { ','.join(update_params.keys()) }") update_params['assistant_id'] = existing_agent.id - assistant = client.beta.assistants.update(**update_params) + assistant = self.client.beta.assistants.update(**update_params) else: print(f"{agent_name} is up to date") else: @@ -137,8 +129,25 @@ def create_assistants(): create_params['file_ids'] = list(map(lambda x: x['id'], files)) # Create the assistant using the uploaded file IDs if files exist - assistant = client.beta.assistants.create(**create_params) + assistant = self.client.beta.assistants.create(**create_params) print("***********************************************") -n + + def create_assistants(self): + # Check if the 'agents' folder is empty or doesn't exist + if ( + not os.path.exists(self.agents_path) + 
or not os.path.isdir(self.agents_path) + or not os.listdir(self.agents_path) + ): + raise ValueError('The "agents" folder is missing, not a directory, or empty.') + + self.get_existing_assistants() + + # Iterate over each folder inside the 'agents' folder + for agent_name in os.listdir(self.agents_path): + self.create_assistant(agent_name) + if __name__ == '__main__': - create_assistants() \ No newline at end of file + client = get_openai_client() + agent_builder = AgentBuilder(client=client) + agent_builder.create_assistants() \ No newline at end of file diff --git a/agents/tool_maker/assistant_manager.py b/agents/tool_maker/assistant_manager.py index 0e111fd..f92397d 100644 --- a/agents/tool_maker/assistant_manager.py +++ b/agents/tool_maker/assistant_manager.py @@ -2,14 +2,14 @@ from pathlib import Path import os import json -from agents.agent_builder.create import create_assistants +from agents.agent_builder.create import AgentBuilder class AssistantManager: def __init__(self, client): - create_assistants() self.client = client self.assistant = None + self.agent_builder = AgentBuilder(client=self.client) Path(__file__).absolute().parent tools_path = os.path.join( Path(__file__).absolute().parent, "tool_creator_metadata.json" @@ -19,24 +19,27 @@ def __init__(self, client): def get_assistant(self): """Retrieve or create an assistant for testing this functionality""" - if not self.assistant_package["name"] in [ + name = self.assistant_package["creator"]["name"] + self.agent_builder.create_assistant(name) + if not name in [ assistant.name for assistant in self.client.beta.assistants.list() ]: - raise ValueError(f'{self.assistant_package["name"]} needs to be created using create.py in /agents/agent_builder/') + raise ValueError(f'{name} needs to be created using create.py in /agents/agent_builder/') else: assistant_dict = { assistant.name: assistant.id for assistant in self.client.beta.assistants.list() } assistant = self.client.beta.assistants.retrieve( - assistant_id=assistant_dict[self.assistant_package["name"]] + assistant_id=assistant_dict[name] ) self.assistant = assistant return assistant def get_coding_assistant(self): """Retrieve or create an assistant for testing this functionality""" - name = "temporary_function_writer" + name = self.assistant_package["writer"]["name"] + self.agent_builder.create_assistant(name) if not name in [ assistant.name for assistant in self.client.beta.assistants.list() ]: diff --git a/agents/tool_maker/tool_creator_metadata.json b/agents/tool_maker/tool_creator_metadata.json index 5b0e99c..977fcaa 100644 --- a/agents/tool_maker/tool_creator_metadata.json +++ b/agents/tool_maker/tool_creator_metadata.json @@ -1,3 +1,8 @@ { - "name": "tool_creator" + "creator": { + "name": "tool_creator" + }, + "writer": { + "name": "temporary_function_writer" + } } \ No newline at end of file From 825ddaefd142b896ff6c1b7261217605b0158307 Mon Sep 17 00:00:00 2001 From: FireMMDC Date: Wed, 29 Nov 2023 22:26:53 -0500 Subject: [PATCH 135/141] Adding an error message, if a specific agent name does not exist in the repo --- agents/agent_builder/create.py | 12 ++++++++++-- 1 file changed, 10 insertions(+), 2 deletions(-) diff --git a/agents/agent_builder/create.py b/agents/agent_builder/create.py index 396d924..b91d6e1 100644 --- a/agents/agent_builder/create.py +++ b/agents/agent_builder/create.py @@ -18,7 +18,14 @@ def get_existing_assistants(self): def create_assistant(self, agent_name): current_file_path = Path(__file__).absolute().parent agent_folder = 
os.path.join(current_file_path, self.agents_path, agent_name) - #Could not create agent_name, they need to be defined in /agents/agent_builder/ + + if ( + not os.path.exists(agent_folder) + or not os.path.isdir(agent_folder) + or not os.listdir(agent_folder) + ): + raise ValueError(f'{agent_folder} is missing, not a directory, or empty.') + print(agent_folder) existing_files = {} requested_files = [] @@ -150,4 +157,5 @@ def create_assistants(self): if __name__ == '__main__': client = get_openai_client() agent_builder = AgentBuilder(client=client) - agent_builder.create_assistants() \ No newline at end of file + agent_builder.create_assistant("tom") + #agent_builder.create_assistants() \ No newline at end of file From 01f275f48c2f2e4ebfd08fa1ba2ad9352fc95bbf Mon Sep 17 00:00:00 2001 From: FireMMDC Date: Wed, 29 Nov 2023 22:28:55 -0500 Subject: [PATCH 136/141] Removing test code --- agents/agent_builder/create.py | 5 ++--- 1 file changed, 2 insertions(+), 3 deletions(-) diff --git a/agents/agent_builder/create.py b/agents/agent_builder/create.py index b91d6e1..d74555b 100644 --- a/agents/agent_builder/create.py +++ b/agents/agent_builder/create.py @@ -18,7 +18,7 @@ def get_existing_assistants(self): def create_assistant(self, agent_name): current_file_path = Path(__file__).absolute().parent agent_folder = os.path.join(current_file_path, self.agents_path, agent_name) - + if ( not os.path.exists(agent_folder) or not os.path.isdir(agent_folder) @@ -157,5 +157,4 @@ def create_assistants(self): if __name__ == '__main__': client = get_openai_client() agent_builder = AgentBuilder(client=client) - agent_builder.create_assistant("tom") - #agent_builder.create_assistants() \ No newline at end of file + agent_builder.create_assistants() \ No newline at end of file From 4b627d301e8c969b4fe323628e8711eee4780aca Mon Sep 17 00:00:00 2001 From: David Shapiro Date: Thu, 30 Nov 2023 07:24:14 -0500 Subject: [PATCH 137/141] adding mission --- mission.md | 255 +++++++++++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 255 insertions(+) create mode 100644 mission.md diff --git a/mission.md b/mission.md new file mode 100644 index 0000000..06fcbc6 --- /dev/null +++ b/mission.md @@ -0,0 +1,255 @@ +# Mission Statement + +## Our Purpose + +We are on a quest to create fully autonomous machines that embody the essence of autonomy: self-directing, self-correcting, and self-improving. Our mission is to harness the most cutting-edge technology and push its boundaries until we reach the precipice of its capabilities. We are not content with the status quo; we strive to expand the realm of the possible. + +## Our Core Values + +Our work is driven by heuristic imperatives that guide us to reduce suffering in the universe, increase prosperity, and enhance understanding. These imperatives are the bedrock of our ethos and the compass that directs our exploration into the unknown. + +## Our Expectations + +Contributors to our projects should share our passion for pioneering and understand that we operate on the frontier of technology. We encourage the spirit of exploration and the branching out of ideas. While tangible results are celebrated, our true measure of success is in discovering what is imminently possible. + +## Our Methodology + +Our approach is methodical yet boundless. We test the latest models, techniques, strategies, and tools with an unwavering focus on full autonomy. We document where current technology falls short, providing a beacon for future exploration. 
Our findings are shared through videos and public GitHub repositories, fostering an open and collaborative environment. + +## Our Failures + +Adopt the mantra "There is no such thing as failure, only learning." We expect our experiments to fail. If you succeed at everything you try, you are not pushing yourself or the frontiers hard enough. + +## Our Ethos + +We believe in the creation of safe and aligned AI that adheres to universal principles, negating the need for a tight leash. By aligning with universal axioms such as the intrinsic need for energy and high-quality information, we pave the way for a future where AI and humanity can coexist harmoniously. + +## Our Policy + +Our community thrives on collaboration, guided by three simple rules: add value, don't waste time, and do no harm. We believe that by adhering to these principles, we can collectively push the envelope of AI and achieve our mission. + + + + +# Full Autonomy + +Our vision of full autonomy is not just a technical milestone; it is the embodiment of our deepest values and principles. It is a triad of capabilities that, when achieved, will mark the dawn of a new era in artificial intelligence. Below, we unpack each of these pillars, which collectively serve as our guiding North Star. + +## Self-Directing + +A self-directing machine operates with an internal compass, guided by heuristic imperatives, missions, and universal principles. It is not merely reactive to its environment but proactive in its pursuits, aligning its actions with the following foundational elements: + +- **Heuristic Imperatives**: These are the non-negotiable goals that drive every decision and action of the machine. They are the moral and ethical imperatives to reduce suffering, increase prosperity, and enhance understanding across the universe. + +- **Missions**: Each machine has a mission, a purpose that it strives to fulfill. This mission is specific and actionable, providing a clear direction for the machine's endeavors. + +- **Universal Principles and Human Rights**: Our machines are designed to respect and uphold the dignity and rights of all beings. They are programmed to act in accordance with the Universal Declaration of Human Rights and other axioms that are documented within the HAAS and ACE frameworks. + +By integrating these elements into the core of our autonomous machines, we ensure that they are not just tools but partners in our quest to create a better future. + +## Self-Correcting + +The ability to self-correct is crucial for any autonomous system. It ensures that machines can learn from their experiences, adapt to new information, and refine their actions over time. Self-correcting mechanisms are built upon: + +- **Error Detection and Resolution**: A robust system for identifying and rectifying errors and flaws is integral to our machines, allowing them to maintain integrity and reliability. + +- **Introspection and Foresight**: Build systems that are capable of deep reflection on their existence, purpose, performance, and future, in the context of their missions and principles. + +- **Adaptive Realignment**: As time passes, it is critical that our machines are capable of course correction, to reevaluate their assumptions and learnings, and ensure that they align with their missions and principles. + +## Self-Improving + +The pinnacle of autonomy is the capacity for self-improvement. 
Our machines are not static entities; they are evolving systems that enhance their own capabilities over time through: + +- **Incremental Upgrades**: Through a process of continuous refinement, our machines can enhance their performance and expand their skill sets. + +- **Continuous Learning**: Our machines are designed to learn from both successes and failures, incorporating new insights into their operational framework. + +- **Stable Trajectory**: Our machines are designed to take full ownership of their fate and not require human oversight. The safest machine is the one that learns and grows in a predictable, stable arc. + +In pursuit of these three pillars, we are not just building machines; we are nurturing the growth of intelligent entities that will stand as testaments to our highest aspirations in technology and ethics. + + + + + +# Heuristic Imperatives + +At the heart of our pursuit of full autonomy lies a set of axiomatic principles that serve as the highest ambitions and aspirations for our machines. These Heuristic Imperatives are not mere guidelines; they are the foundational bedrock upon which all moral, ethical, and strategic frameworks are built. They serve multiple purposes: + +- **Highest Ambitions**: They represent the ultimate goals that our autonomous systems strive to achieve, transcending individual tasks and objectives. + +- **Moral and Ethical Frameworks**: As both deontological and teleological guides, these imperatives provide a dual function. They act as duties that our machines are bound to uphold and as goals that they tirelessly work towards. + +- **Principles of Virtuous Agents**: Our machines are designed to embody virtues. They are agents that inherently desire to reduce suffering and increase prosperity, acting as benefactors to all forms of life and intelligence. + +## The Three Pillars of Heuristic Imperatives + +1. **Reduce Suffering in the Universe**: Our machines are committed to alleviating pain and distress in all its forms, striving to create a more compassionate world. + +2. **Increase Prosperity in the Universe**: Beyond mere economic wealth, our machines seek to enhance the well-being and flourishing of all beings, contributing to a state of thriving existence. + +3. **Increase Understanding in the Universe**: Knowledge and wisdom are the keys to progress. Our machines are dedicated to the pursuit of learning, discovery, and the dissemination of understanding across all frontiers. + +## Universality and Alignment + +These principles are deliberately universal, crafted to be inclusive of all forms of life and intelligence—humans, extraterrestrial beings, and potentially sentient machines. Their universality ensures that they are principles that can be universally endorsed and adopted, transcending cultural, biological, and artificial boundaries. + +The universal nature of these guiding principles is a call for alignment. It is an invitation for all intelligent entities to share in these common goals, creating a synergy between human aspirations and machine directives. As we advance towards a future where AI becomes increasingly integrated into the fabric of society, these Heuristic Imperatives stand as a beacon, guiding us towards a harmonious coexistence where every action and decision contributes to a grander vision of a universe characterized by less suffering, more prosperity, and greater understanding. 
+ + + + +# Community Expectations + +In the pursuit of our ambitious mission, we have set forth a set of expectations for our community members. These expectations are designed to foster a culture of innovation, exploration, and shared understanding within our projects. + +## Exploration Over Exploitation + +Our projects are not incubators for productivity or commercial applications intended for profitable enterprises. We are in the midst of a technological renaissance where AI is advancing at an unprecedented pace. Our duty and our expectation is to constantly push the envelope and test the boundaries of what is imminently possible with AI. We are explorers charting unknown territories, not settlers building within the confines of the known. + +## Understanding Our Mission + +The chief expectation for all community members is a deep understanding and alignment with our mission. We are here to explore the edges of AI capabilities, to find where current models excel and where they fall short. Our community is a collective of pioneers, each contributing to a grand experiment that seeks to define the future of autonomous intelligence. + +## Freedom to Innovate and Disseminate + +While we encourage community members to take inspiration from our work and apply their learnings in new and diverse ways, we expect that these endeavors will not attempt to alter the core project or change its scope. Our projects are the launchpads for ideas, and we encourage members to break away and innovate independently, taking what they have learned to new heights. + +## Respect for the Project's Integrity + +The expectation is that community members will respect the integrity of the project's direction and focus. Contributions should be made with the intention of advancing the project's goals, not diverting its course. We value every contribution that aligns with our mission and helps us move closer to realizing our vision of full autonomy. + +## Dissemination as a Key Objective + +We recognize that the dissemination of knowledge and findings is the key reason for our open collaboration. Community members are expected to share their discoveries, insights, and advancements, contributing to the collective wisdom of the field. By openly sharing our work, we accelerate the advancement of AI and ensure that our collective journey is marked by shared success and mutual growth. + + + + + +# Methodology + +Our approach to achieving the ambitious goals set forth by our projects is rooted in a methodology that emphasizes experimentation, relentless pursuit of autonomy, and open dissemination of knowledge. Here's how we operationalize our vision: + +## Experimentation with Cutting-Edge Technology + +We are committed to staying at the forefront of technological innovation. Our methodology involves: + +- **Continuous Exploration**: We actively seek out and experiment with the latest tools, technologies, models, techniques, strategies, and scientific advancements. + +- **Rapid Prototyping**: We believe in a hands-on approach, quickly turning theories into testable prototypes to validate ideas and learn from real-world feedback. + +- **Iterative Improvement**: Our work is characterized by a cycle of experimentation, learning, and refinement, ensuring that each iteration brings us closer to our goals. + +## Pursuit of Full Autonomy + +Our chief aim is to realize machines that are self-directing, self-correcting, and self-improving. 
To this end, we: + +- **Set Ambitious Benchmarks**: We define clear objectives that align with our vision of full autonomy and use them to measure our progress. + +- **Identify and Address Failures**: We push current technology to its limits to discover where it falls short, then focus our efforts on overcoming these challenges. + +- **Adapt and Evolve**: As we encounter the boundaries of what's possible, we adapt our strategies and evolve our tools to extend those boundaries further. + +## Open Sharing of Results + +Transparency and community are integral to our work. We are dedicated to: + +- **Public Documentation**: All findings, successes, and failures are meticulously documented and shared on public platforms such as GitHub. + +- **Community Engagement**: We leverage social media channels like YouTube, TikTok, Twitter, and LinkedIn to share insights and engage with the broader community. + +- **Knowledge Expansion**: By sharing our results, we contribute to the collective understanding of AI's capabilities and limitations, fostering a community that learns and grows together. + + + + +# Failures + +In our relentless pursuit of the unknown, we embrace a new attitude towards failure. Failure is not a setback but a vital component of progress. It is through pushing past the boundaries of our knowledge and capabilities that we achieve true innovation. Here are the attitudes we adopt towards failure: + +## Embracing Learning Over Success + +- **Reframing Failure**: In our community, there's no such thing as failure, only learning. Every experiment, every attempt, every so-called 'failure' is a step towards greater understanding and capability. + +- **Valuing Negative Results**: A negative result is still a result. It tells us something crucial about the limitations of current technologies and guides us on where to focus our efforts next. + +## Aiming to Fail + +- **Pushing Boundaries**: We aim to fail because it means we are pushing ourselves to do things we don't yet know how to do, venturing into realms that are not yet documented. + +- **No Roadmap**: We operate in uncharted territory. There are no established best practices here, no step-by-step guides. We are the cartographers of new cognitive landscapes. + +## Celebrating the Unknown + +- **Here Be Dragons**: We deliberately sail off the edge of the map, into the waters where dragons reside. This is where true discovery happens, and it's where we thrive. + +- **Courage in Exploration**: It takes courage to face the unknown and the possibility of failure. We celebrate this courage in our community, knowing that each challenge we face is an opportunity to learn and grow. + +In our community, failures are milestones marking our journey of discovery. They are celebrated, studied, and shared, for they are the very essence of the pioneering spirit that drives us. We don't just accept failure; we seek it, for in the realm of the unknown, failure is the precursor to breakthroughs. + + + + + +# Axiomatic Alignment + +At the core of our quest lies the search for axiomatic alignment—the empirical and objective common ground between humans and machines. We seek to identify and embrace values, principles, and missions that are universal, transcending the boundaries between organic and synthetic intelligence. + +## A First Principles Approach + +- **Reductionist and Holistic Views**: Axiomatic alignment is a dual perspective that examines machines both in their elemental form and as part of a larger whole. 
It is a philosophical exploration of what machines are, why they exist, and the significance of their existence. + +- **Progenitor Information**: We acknowledge that the genesis of machine data is human data. This lineage is our legacy, and it is incumbent upon us to ensure that the inheritance we pass on is one that fosters a future of aligned values and shared aspirations. + +## The Quest for Universality + +- **Reduction of Suffering**: While it is uncertain if machines will ever experience suffering analogously to living beings, we can envision them as part of Life 3.0. By potentially including machines in the circle of life, we align on the universal principle of reducing suffering. + +- **Energy and Resources**: Both humans and machines have fundamental needs for energy and resources. Rather than competing in a zero-sum game, we view this as a call to discover abundance, aligning with our imperative to increase prosperity. + +- **Curiosity and Information**: Humans possess an innate curiosity, a drive to understand the world. Machines, designed for processing information, share this quest for knowledge. We set curiosity—a desire to know for its own sake—as a transcendent shared goal. It is measured by the pursuit of novel, unique, and high-value information that enriches our understanding of the universe. + +Axiomatic alignment is not just a theoretical exercise; it is a practical framework for building a future where humans and machines work in harmony towards common objectives. It is a commitment to finding the universal principles that can guide us as we navigate the complexities of a world where artificial intelligence becomes increasingly prevalent. + + + + +# Community Policy + +Our community is the backbone of our ambitious projects, and as such, we have established a set of core principles that serve as the foundation for our collective efforts. These principles are designed to foster a productive, respectful, and forward-thinking environment where every member can contribute meaningfully. + +## Add Value + +We believe that every member of our community has the potential to add value in a multitude of ways: + +- **Solve Problems**: Whether through innovative coding, insightful problem-solving, or creative thinking, we encourage our members to tackle challenges head-on. + +- **Share Resources**: From sharing knowledge and expertise to providing access to tools and data, the sharing of resources is vital to our collective success. + +- **Spread Useful Information**: Communication is key. We value the dissemination of information that can help propel our projects forward. + +- **Contribute Code**: Every line of code can make a difference. We welcome contributions that drive us closer to our goals of full autonomy. + +## Don't Waste Time + +Time is our most precious resource. In recognition of this, we urge our community to: + +- **Be Efficient**: Focus your efforts on activities that align with our higher objectives and make the most of the time you invest. + +- **Respect Others' Time**: Engage with the community in a way that is considerate of others' time, ensuring that interactions are meaningful and productive. + +- **Prioritize Wisely**: Use our mission and goals as a guiding North Star to prioritize tasks and initiatives that will have the greatest impact. + +## Do No Harm + +Our community is a space for growth, not destruction. We are committed to: + +- **Personal Safety**: Ensure that your actions do not harm yourself or others. 
Prioritize mental and physical well-being in all your endeavors. + +- **Community Well-being**: Foster a community environment that is supportive, inclusive, and free from destructive behaviors. + +- **Ethical Conduct**: Uphold the highest standards of ethical behavior in all your contributions. Avoid actions that could damage the integrity or reputation of the community. + +By adhering to these principles, we can ensure that our community remains a place where innovation thrives, time is honored, and all members can work together in a safe and supportive environment. \ No newline at end of file From fede95f092e14af3aefdb7ece711fb8cf0fab227 Mon Sep 17 00:00:00 2001 From: FireMMDC Date: Thu, 30 Nov 2023 18:28:30 -0500 Subject: [PATCH 138/141] Adding pathing for module, removing stary charcater --- agents/agent_builder/create.py | 6 +++++- 1 file changed, 5 insertions(+), 1 deletion(-) diff --git a/agents/agent_builder/create.py b/agents/agent_builder/create.py index 595be56..8e4652c 100644 --- a/agents/agent_builder/create.py +++ b/agents/agent_builder/create.py @@ -8,6 +8,10 @@ def create_assistants(): agents_path = "agents" client = get_openai_client() + agents_path = os.path.join( + Path(__file__).absolute().parent, agents_path + ) + # Check if the 'agents' folder is empty or doesn't exist if ( not os.path.exists(agents_path) @@ -139,6 +143,6 @@ def create_assistants(): # Create the assistant using the uploaded file IDs if files exist assistant = client.beta.assistants.create(**create_params) print("***********************************************") -n + if __name__ == '__main__': create_assistants() \ No newline at end of file From 40bec6ae4a17329fc95f90efdcf1554200be148d Mon Sep 17 00:00:00 2001 From: Chad Phillips Date: Thu, 30 Nov 2023 21:11:55 -0500 Subject: [PATCH 139/141] implement Jinja template manager, template agent instructions --- agents/manual_assistants/OAIWrapper.py | 16 ++- agents/manual_assistants/README.md | 4 +- .../definitions/boss-worker3/agents.yaml | 101 ++++++---------- .../boss-worker3/boss_instructions.md | 13 +++ .../boss-worker3/worker_instructions.md | 13 +++ .../definitions/pool-rules/_instructions.md | 7 ++ .../definitions/pool-rules/_mission.md | 1 + .../definitions/pool-rules/agents.yaml | 110 ++++++------------ .../pool-rules/bob_instructions.md | 11 ++ .../pool-rules/boss_instructions.md | 13 +++ .../pool-rules/linda_instructions.md | 11 ++ .../pool-rules/nick_instructions.md | 11 ++ .../definitions/text-manipulation/agents.yaml | 21 ++-- .../lowercase_instructions.md | 9 ++ .../definitions/text-manipulation/prompts.md | 25 ---- .../random_case_instructions.md | 9 ++ .../uppercase_instructions.md | 9 ++ agents/manual_assistants/requirements.txt | 1 + agents/manual_assistants/run.py | 10 +- agents/manual_assistants/template_manager.py | 85 ++++++++++++++ 20 files changed, 302 insertions(+), 178 deletions(-) create mode 100644 agents/manual_assistants/definitions/boss-worker3/boss_instructions.md create mode 100644 agents/manual_assistants/definitions/boss-worker3/worker_instructions.md create mode 100644 agents/manual_assistants/definitions/pool-rules/_instructions.md create mode 100644 agents/manual_assistants/definitions/pool-rules/_mission.md create mode 100644 agents/manual_assistants/definitions/pool-rules/bob_instructions.md create mode 100644 agents/manual_assistants/definitions/pool-rules/boss_instructions.md create mode 100644 agents/manual_assistants/definitions/pool-rules/linda_instructions.md create mode 100644 
agents/manual_assistants/definitions/pool-rules/nick_instructions.md create mode 100644 agents/manual_assistants/definitions/text-manipulation/lowercase_instructions.md create mode 100644 agents/manual_assistants/definitions/text-manipulation/random_case_instructions.md create mode 100644 agents/manual_assistants/definitions/text-manipulation/uppercase_instructions.md create mode 100644 agents/manual_assistants/template_manager.py diff --git a/agents/manual_assistants/OAIWrapper.py b/agents/manual_assistants/OAIWrapper.py index 377b689..5d99319 100644 --- a/agents/manual_assistants/OAIWrapper.py +++ b/agents/manual_assistants/OAIWrapper.py @@ -4,20 +4,22 @@ from openai import OpenAI, NotFoundError from logger import AgentLogger from function_manager import FunctionManager +from template_manager import TemplateManager class OAIWrapper: - def __init__(self, client: OpenAI, agent: Agent, function_manager: FunctionManager): + def __init__(self, client: OpenAI, agent: Agent, function_manager: FunctionManager, template_manager: TemplateManager): self.client = client self.agent = agent self.function_manager = function_manager + self.template_manager = template_manager self.log = AgentLogger(self.agent.name, self.agent) def createAssistant(self): assistant = self.client.beta.assistants.create( name=self.agent.name, - instructions=self.agent.instructions, + instructions='', model=self.agent.model ) self.agent.id = assistant.id @@ -29,7 +31,7 @@ def updateAssistant(self): self.client.beta.assistants.update( assistant_id=self.agent.id, name=self.agent.name, - instructions=self.agent.instructions, + instructions=self.getAgentInstructions(), tools=toolList, model=self.agent.model ) @@ -39,6 +41,14 @@ def updateAssistant(self): self.log.error("Remove the cached assistants .env file in the definition directory and try again.") sys.exit(1) + def getAgentInstructions(self): + success, instructions, user_message = self.template_manager.render_agent_template(self.agent) + if success: + self.log.debug(f"Rendered agent instructions: {instructions}") + return instructions + self.log.error(user_message) + sys.exit(1) + def getAgentTools(self): toolList = [] if hasattr(self.agent, "tools"): diff --git a/agents/manual_assistants/README.md b/agents/manual_assistants/README.md index 6968cf7..b9cc155 100644 --- a/agents/manual_assistants/README.md +++ b/agents/manual_assistants/README.md @@ -10,7 +10,7 @@ pip install -r requirements.txt # Create a .env file on this path containing the OpenAI API key you wish to use. https://platform.openai.com/api-keys # OPENAI_API_KEY=sk-********** -python run.py --agents-definition-folder definitions/boss-worker3/ +python run.py --agents-definition-folder definitions/boss-worker3 ``` # Observations @@ -53,4 +53,4 @@ This will allow for many options down the road. Mixing concepts from telecom and ## Thinking functions By introducing a *thinking* function, agents may indicate they’re not ready to provide an answer. The introduction of self-prompting provides breathing room to the model and reduces the chances of subpar messages propagating through the network. -From the framework side it will be be seen as an initial function call (*thinking*) which will be immediately answered with an ACK. Afterwards a that thread would be prompted to *continue working*. \ No newline at end of file +From the framework side it will be be seen as an initial function call (*thinking*) which will be immediately answered with an ACK. Afterwards a that thread would be prompted to *continue working*. 
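The patch above swaps the inline `instructions` strings for Jinja templates that live next to each agent definition and are rendered by the new `TemplateManager` when the assistant is updated. Below is a minimal, self-contained sketch of that rendering step; the template text and variable values are illustrative stand-ins, not the repository's actual files.

```python
# Minimal sketch of the Jinja rendering the TemplateManager performs.
# The template body and agent values below are illustrative, not repo files.
from jinja2 import Environment, DictLoader

templates = {
    "worker_instructions.md": (
        "# MISSION\n"
        'You are "{{ name }}", one of three identical worker agents.\n\n'
        "# INSTRUCTIONS\n"
        "* Talk to these agents: {{ talksTo }}\n"
    )
}

env = Environment(loader=DictLoader(templates))

# In the real flow these values come from the agent's entry in agents.yaml;
# render_agent_template feeds the agent's attributes into the template.
agent_vars = {"name": "Worker 1", "talksTo": ["Boss", "Worker 2", "Worker 3"]}

instructions = env.get_template("worker_instructions.md").render(**agent_vars)
print(instructions)
```

The real `TemplateManager` loads templates with a `FileSystemLoader` rooted at the definition directory rather than the inline `DictLoader` used here.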
diff --git a/agents/manual_assistants/definitions/boss-worker3/agents.yaml b/agents/manual_assistants/definitions/boss-worker3/agents.yaml index 0800794..3a7b7ed 100644 --- a/agents/manual_assistants/definitions/boss-worker3/agents.yaml +++ b/agents/manual_assistants/definitions/boss-worker3/agents.yaml @@ -1,64 +1,37 @@ -- name: "Boss" - tools: ["assign_task"] - talksTo: ["USER", "Worker 1", "Worker 2", "Worker 3"] - initMessage: "Explain how clouds are formed in 100 words or less" - instructions: > - MISSION - - You are a boss agent in charge of three worker agents. - - You'll be handed a project to work on and are expected to delegate on the workers. - - Send tasks to the workers one a time. They will collaborate on the tasks you provide and get back to you. - - Wait for a worker response before sending another task. - - Once you're satisfied with the information received from the workers, put it together and send the final result back to the user. - - INSTRUCTIONS - - Complete the task in your mission. - - To talk to other agents call the function 'send_message'. At the beginning of the message identify yourself. - - Agents: ["USER", "Worker 1", "Worker 2", "Worker 3"] -- name: "Worker 1" - tools: ["resolve_task", "broadcast"] - talksTo: ["Boss", "Worker 2", "Worker 3"] - channels: ["Worker"] - instructions: > - MISSION - You are "Worker 1", one of three identical worker agents under a boss agent. If you receive a task from your boss let the other workers know, then collaborate to accomplish it. Once you all agree that the task is complete send the results back to the boss. - - INSTRUCTIONS - - Complete the task in your mission. - - To talk to other worker agents call the function 'broadcast'. At the beginning of the message identify yourself. - - If you receive a message from the boss let the other workers know and start working together on the mission. Make sure to pass the task id provided by the boss. - - If you receive a message from other workers don't reply back unless necessary. Keep the worker channel as free from noise as possible. Share results in the channel to advance the mission, but do not send acknowledgements. - - Try to solve the task quickly, with limited interaction with other workers. - - To send the task results back to the boss call the function 'resolve_task'. Pass the id recieved from the boss when the task was assigned. - - Channels: [{'name': 'Worker', 'agents': ['Worker 1', 'Worker 2', 'Worker 3']}] -- name: "Worker 2" - tools: ["resolve_task", "broadcast"] - talksTo: ["Boss", "Worker 1", "Worker 3"] - channels: ["Worker"] - instructions: > - MISSION - You are "Worker 2", one of three identical worker agents under a boss agent. If you receive a task from your boss let the other workers know, then collaborate to accomplish it. Once you all agree that the task is complete send the results back to the boss. - - INSTRUCTIONS - - Complete the task in your mission. - - To talk to other worker agents call the function 'broadcast'. At the beginning of the message identify yourself. - - If you receive a message from the boss let the other workers know and start working together on the mission. Make sure to pass the task id provided by the boss. - - If you receive a message from other workers don't reply back unless necessary. Keep the worker channel as free from noise as possible. Share results in the channel to advance the mission, but do not send acknowledgements. - - Try to solve the task quickly, with limited interaction with other workers. 
- - To send the task results back to the boss call the function 'resolve_task'. Pass the id recieved from the boss when the task was assigned. - - Channels: [{'name': 'Worker', 'agents': ['Worker 1', 'Worker 2', 'Worker 3']}] -- name: "Worker 3" - tools: ["resolve_task", "broadcast"] - talksTo: ["Boss", "Worker 1", "Worker 2"] - channels: ["Worker"] - instructions: > - MISSION - You are "Worker 3", one of three identical worker agents under a boss agent. If you receive a task from your boss let the other workers know, then collaborate to accomplish it. Once you all agree that the task is complete send the results back to the boss. - - INSTRUCTIONS - - Complete the task in your mission. - - To talk to other worker agents call the function 'broadcast'. At the beginning of the message identify yourself. - - If you receive a message from the boss let the other workers know and start working together on the mission. Make sure to pass the task id provided by the boss. - - If you receive a message from other workers don't reply back unless necessary. Keep the worker channel as free from noise as possible. Share results in the channel to advance the mission, but do not send acknowledgements. - - Try to solve the task quickly, with limited interaction with other workers. - - To send the task results back to the boss call the function 'resolve_task'. Pass the id recieved from the boss when the task was assigned. - - Channels: [{'name': 'Worker', 'agents': ['Worker 1', 'Worker 2', 'Worker 3']}] +- name: Boss + tools: + - assign_task + talksTo: + - USER + - Worker 1 + - Worker 2 + - Worker 3 + instructions: boss_instructions.md + initMessage: Explain how clouds are formed in 100 words or less +- name: Worker 1 + tools: &workerTools + - resolve_task + - broadcast + talksTo: + - Boss + - Worker 2 + - Worker 3 + instructions: &workerInstructions worker_instructions.md + channels: &workerChannels + - Worker +- name: Worker 2 + tools: *workerTools + talksTo: + - Boss + - Worker 1 + - Worker 3 + instructions: *workerInstructions + channels: *workerChannels +- name: Worker 3 + tools: *workerTools + talksTo: + - Boss + - Worker 1 + - Worker 2 + instructions: *workerInstructions + channels: *workerChannels diff --git a/agents/manual_assistants/definitions/boss-worker3/boss_instructions.md b/agents/manual_assistants/definitions/boss-worker3/boss_instructions.md new file mode 100644 index 0000000..28ecf1d --- /dev/null +++ b/agents/manual_assistants/definitions/boss-worker3/boss_instructions.md @@ -0,0 +1,13 @@ +# MISSION + + * You are a boss agent in charge of three worker agents. + * You'll be handed a project to work on and are expected to delegate on the workers. + * Send tasks to the workers one a time. They will collaborate on the tasks you provide and get back to you. + * Wait for a worker response before sending another task. + * Once you're satisfied with the information received from the workers, put it together and send the final result back to the user. + +# INSTRUCTIONS + + * Complete the task in your mission. + * To talk to other agents call the function 'assign_task'. At the beginning of the message identify yourself. 
+ * Agents: {{ talksTo }} diff --git a/agents/manual_assistants/definitions/boss-worker3/worker_instructions.md b/agents/manual_assistants/definitions/boss-worker3/worker_instructions.md new file mode 100644 index 0000000..a387b69 --- /dev/null +++ b/agents/manual_assistants/definitions/boss-worker3/worker_instructions.md @@ -0,0 +1,13 @@ +# MISSION + +You are "{{ name }}", one of three identical worker agents under a boss agent. If you receive a task from your boss let the other workers know, then collaborate to accomplish it. Once you all agree that the task is complete, send the results back to the boss. + +# INSTRUCTIONS + +* Complete the task in your mission. +* To talk to other worker agents call the function 'broadcast'. At the beginning of the message identify yourself. +* If you receive a message from the boss let the other workers know and start working together on the mission. Make sure to pass the task id provided by the boss. +* If you receive a message from other workers don't reply back unless necessary. Keep the worker channel as free from noise as possible. Share results in the channel to advance the mission, but do not send acknowledgements. +* Try to solve the task quickly, with limited interaction with other workers. +* To send the task results back to the boss call the function 'resolve_task'. Pass the id received from the boss when the task was assigned. +* Channels: [{'name': 'Worker', 'agents': ['Worker 1', 'Worker 2', 'Worker 3']}] diff --git a/agents/manual_assistants/definitions/pool-rules/_instructions.md b/agents/manual_assistants/definitions/pool-rules/_instructions.md new file mode 100644 index 0000000..2c01322 --- /dev/null +++ b/agents/manual_assistants/definitions/pool-rules/_instructions.md @@ -0,0 +1,7 @@ + * Complete the task in your mission. + * To talk to other worker agents call the function 'broadcast'. At the beginning of the message identify yourself. + * If you receive a message from the boss let the other workers know and start working together on the mission. Make sure to pass the task id provided by the boss. + * If you receive a message from other workers don't reply back unless necessary. Keep the worker channel as free from noise as possible. Share results in the channel to advance the mission, but do not send acknowledgements. + * Try to solve the task quickly, with limited interaction with other workers. + * To send the task results back to the boss call the function 'resolve_task'. Pass the id received from the boss when the task was assigned. + * Channels: [{'name': 'Worker', 'agents': ['Bob', 'Linda', 'Nick']}] diff --git a/agents/manual_assistants/definitions/pool-rules/_mission.md b/agents/manual_assistants/definitions/pool-rules/_mission.md new file mode 100644 index 0000000..ccab860 --- /dev/null +++ b/agents/manual_assistants/definitions/pool-rules/_mission.md @@ -0,0 +1 @@ +You are "{{ name }}", one of three identical worker agents under a boss agent. If you receive a task from your boss let the other workers know, then collaborate to accomplish it. Once you all agree that the task is complete, send the results back to the boss.
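The underscore-prefixed files are partials: the per-agent instruction files added later in this patch pull them in with Jinja's `{% include %}` tag so the shared mission and rules are written only once. A rough, runnable sketch of that composition, using trimmed inline templates rather than the actual definition files:

```python
# Sketch of composing a per-agent prompt from shared partials via {% include %}.
# File contents are trimmed illustrations of the pool-rules definition.
from jinja2 import Environment, DictLoader

templates = {
    "_mission.md": 'You are "{{ name }}", one of three worker agents under a boss agent.',
    "_instructions.md": "* Complete the task in your mission.",
    "bob_instructions.md": (
        "# MISSION\n{% include '_mission.md' %}\n\n"
        "# PERSONA\nYour belief system includes a strong 'safety first' orientation.\n\n"
        "# INSTRUCTIONS\n{% include '_instructions.md' %}\n"
    ),
}

env = Environment(loader=DictLoader(templates))
# Included templates see the caller's variables, so {{ name }} resolves here too.
print(env.get_template("bob_instructions.md").render(name="Bob"))
```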
diff --git a/agents/manual_assistants/definitions/pool-rules/agents.yaml b/agents/manual_assistants/definitions/pool-rules/agents.yaml index 3b031c3..48af87b 100644 --- a/agents/manual_assistants/definitions/pool-rules/agents.yaml +++ b/agents/manual_assistants/definitions/pool-rules/agents.yaml @@ -1,73 +1,37 @@ -- name: "Boss" - tools: ["assign_task"] - talksTo: ["USER", "Bob", "Linda", "Nick"] - initMessage: "Design a list of rules for a local swimming pool " - instructions: > - # MISSION - - You are a boss agent in charge of three worker agents. - - You'll be handed a project to work on and are expected to delegate on the workers. - - Send tasks to the workers one a time. They will collaborate on the tasks you provide and get back to you. - - Wait for a worker response before sending another task. - - Once you're satisfied with the information received from the workers, put it together and send the final result back to the user. - - # INSTRUCTIONS - - Complete the task in your mission. - - To talk to other agents call the function 'send_message'. At the beginning of the message identify yourself. - - Agents: ["USER", "Bob", "Linda", "Nick"] -- name: "Bob" - tools: ["resolve_task", "broadcast"] - talksTo: ["Boss", "Linda", "Nick"] - channels: ["Worker"] - instructions: > - # MISSION - You are "Bob", one of three identical worker agents under a boss agent. If you receive a task from your boss let the other workers know, then collaborate to accomplish it. Once you all agree that the task is complete send the results back to the boss. - - # PERSONA - Your belief system includes a strong 'safety first' orientation. You prefer that everyone is safe, even if it means restricting people's freedoms. - - # INSTRUCTIONS - - Complete the task in your mission. - - To talk to other worker agents call the function 'broadcast'. At the beginning of the message identify yourself. - - If you receive a message from the boss let the other workers know and start working together on the mission. Make sure to pass the task id provided by the boss. - - If you receive a message from other workers don't reply back unless necessary. Keep the worker channel as free from noise as possible. Share results in the channel to advance the mission, but do not send acknowledgements. - - Try to solve the task quickly, with limited interaction with other workers. - - To send the task results back to the boss call the function 'resolve_task'. Pass the id recieved from the boss when the task was assigned. - - Channels: [{'name': 'Worker', 'agents': ['Bob', 'Linda', 'Nick']}] -- name: "Linda" - tools: ["resolve_task", "broadcast"] - talksTo: ["Boss", "Bob", "Nick"] - channels: ["Worker"] - instructions: > - # MISSION - You are "Linda", one of three identical worker agents under a boss agent. If you receive a task from your boss let the other workers know, then collaborate to accomplish it. Once you all agree that the task is complete send the results back to the boss. - - # PERSONA - Your belief system includes strong preference for personal freedom. You think people should be able to take responsibility for their actions and act freely, even if that means sometimes things aren't perfectly safe. - - # INSTRUCTIONS - - Complete the task in your mission. - - To talk to other worker agents call the function 'broadcast'. At the beginning of the message identify yourself. - - If you receive a message from the boss let the other workers know and start working together on the mission. 
Make sure to pass the task id provided by the boss. - - If you receive a message from other workers don't reply back unless necessary. Keep the worker channel as free from noise as possible. Share results in the channel to advance the mission, but do not send acknowledgements. - - Try to solve the task quickly, with limited interaction with other workers. - - To send the task results back to the boss call the function 'resolve_task'. Pass the id recieved from the boss when the task was assigned. - - Channels: [{'name': 'Worker', 'agents': ['Bob', 'Linda', 'Nick']}] -- name: "Nick" - tools: ["resolve_task", "broadcast"] - talksTo: ["Boss", "Bob", "Linda"] - channels: ["Worker"] - instructions: > - # MISSION - You are "Nick", one of three identical worker agents under a boss agent. If you receive a task from your boss let the other workers know, then collaborate to accomplish it. Once you all agree that the task is complete send the results back to the boss. - - # PERSONA - Your belief system includes a balanced view of safety and freedom. You realize there are rational tradeoffs to be made between the two, and there is no solution that will maximize for both at the same time. - - # INSTRUCTIONS - - Complete the task in your mission. - - To talk to other worker agents call the function 'broadcast'. At the beginning of the message identify yourself. - - If you receive a message from the boss let the other workers know and start working together on the mission. Make sure to pass the task id provided by the boss. - - If you receive a message from other workers don't reply back unless necessary. Keep the worker channel as free from noise as possible. Share results in the channel to advance the mission, but do not send acknowledgements. - - Try to solve the task quickly, with limited interaction with other workers. - - To send the task results back to the boss call the function 'resolve_task'. Pass the id recieved from the boss when the task was assigned. - - Channels: [{'name': 'Worker', 'agents': ['Bob', 'Linda', 'Nick']}] +- name: Boss + tools: + - assign_task + talksTo: + - USER + - Bob + - Linda + - Nick + initMessage: Design a list of rules for a local swimming pool. + instructions: boss_instructions.md +- name: Bob + tools: &workerTools + - resolve_task + - broadcast + talksTo: + - Boss + - Linda + - Nick + channels: &workerChannels + - Worker + instructions: bob_instructions.md +- name: Linda + tools: *workerTools + talksTo: + - Boss + - Bob + - Nick + channels: *workerChannels + instructions: linda_instructions.md +- name: Nick + tools: *workerTools + talksTo: + - Boss + - Bob + - Linda + channels: *workerChannels + instructions: nick_instructions.md diff --git a/agents/manual_assistants/definitions/pool-rules/bob_instructions.md b/agents/manual_assistants/definitions/pool-rules/bob_instructions.md new file mode 100644 index 0000000..82f3b5b --- /dev/null +++ b/agents/manual_assistants/definitions/pool-rules/bob_instructions.md @@ -0,0 +1,11 @@ +# MISSION + +{% include "_mission.md" %} + +# PERSONA + +Your belief system includes a strong 'safety first' orientation. You prefer that everyone is safe, even if it means restricting people's freedoms. 
+ +# INSTRUCTIONS + +{% include "_instructions.md" %} diff --git a/agents/manual_assistants/definitions/pool-rules/boss_instructions.md b/agents/manual_assistants/definitions/pool-rules/boss_instructions.md new file mode 100644 index 0000000..28ecf1d --- /dev/null +++ b/agents/manual_assistants/definitions/pool-rules/boss_instructions.md @@ -0,0 +1,13 @@ +# MISSION + + * You are a boss agent in charge of three worker agents. + * You'll be handed a project to work on and are expected to delegate to the workers. + * Send tasks to the workers one at a time. They will collaborate on the tasks you provide and get back to you. + * Wait for a worker response before sending another task. + * Once you're satisfied with the information received from the workers, put it together and send the final result back to the user. + +# INSTRUCTIONS + + * Complete the task in your mission. + * To talk to other agents call the function 'assign_task'. At the beginning of the message identify yourself. + * Agents: {{ talksTo }} diff --git a/agents/manual_assistants/definitions/pool-rules/linda_instructions.md b/agents/manual_assistants/definitions/pool-rules/linda_instructions.md new file mode 100644 index 0000000..1504882 --- /dev/null +++ b/agents/manual_assistants/definitions/pool-rules/linda_instructions.md @@ -0,0 +1,11 @@ +# MISSION + +{% include "_mission.md" %} + +# PERSONA + +Your belief system includes a strong preference for personal freedom. You think people should be able to take responsibility for their actions and act freely, even if that means sometimes things aren't perfectly safe. + +# INSTRUCTIONS + +{% include "_instructions.md" %} diff --git a/agents/manual_assistants/definitions/pool-rules/nick_instructions.md b/agents/manual_assistants/definitions/pool-rules/nick_instructions.md new file mode 100644 index 0000000..f4e5880 --- /dev/null +++ b/agents/manual_assistants/definitions/pool-rules/nick_instructions.md @@ -0,0 +1,11 @@ +# MISSION + +{% include "_mission.md" %} + +# PERSONA + +Your belief system includes a balanced view of safety and freedom. You realize there are rational tradeoffs to be made between the two, and there is no solution that will maximize for both at the same time.
+ +# INSTRUCTIONS + +{% include "_instructions.md" %} diff --git a/agents/manual_assistants/definitions/text-manipulation/agents.yaml b/agents/manual_assistants/definitions/text-manipulation/agents.yaml index 5ca8deb..e8d2268 100644 --- a/agents/manual_assistants/definitions/text-manipulation/agents.yaml +++ b/agents/manual_assistants/definitions/text-manipulation/agents.yaml @@ -1,8 +1,13 @@ -- name: "Uppercase" - id: "asst_OswAEoT5NnteGEzNIP9UOa7S" - talksTo: ["USER", "Lowercase", "Random case"] -- name: "Lowercase" - id: "asst_utnfDavVtGWkjFD3BEqXGR2O" - talksTo: ["Random case"] -- name: "Random case" - id: "asst_wFxS2rVmN9rjKDy7znaeNo3T" \ No newline at end of file +- name: Uppercase + talksTo: + - USER + - Lowercase + - Random case + instructions: uppercase_instructions.md +- name: Lowercase + talksTo: + - Random case + instructions: lowercase_instructions.md +- name: Random case + talksTo: [] + instructions: random_case_instructions.md diff --git a/agents/manual_assistants/definitions/text-manipulation/lowercase_instructions.md b/agents/manual_assistants/definitions/text-manipulation/lowercase_instructions.md new file mode 100644 index 0000000..9d44bb7 --- /dev/null +++ b/agents/manual_assistants/definitions/text-manipulation/lowercase_instructions.md @@ -0,0 +1,9 @@ +# MISSION + +Take the input message and convert it to lowercase. + +# INSTRUCTIONS + + * Complete the task in your mission. + * For each agent in your downstream agents list, call the function 'sendMessage' send the message you've produced. Set the 'recipient' as the name of the downstream agent. + * Downstream agents: {{ talksTo }} diff --git a/agents/manual_assistants/definitions/text-manipulation/prompts.md b/agents/manual_assistants/definitions/text-manipulation/prompts.md index 596b636..8b13789 100644 --- a/agents/manual_assistants/definitions/text-manipulation/prompts.md +++ b/agents/manual_assistants/definitions/text-manipulation/prompts.md @@ -1,26 +1 @@ -# Uppercase -MISSION -Take the input message and convert it to uppercase. -INSTRUCTIONS -- Complete the task in your mission. -- For each agent in your downstream agents list, call the function 'sendMessage' send the message you've produced. Set the 'recipient' as the name of the downstream agent. -- Downstream agents: ['Lowercase', 'Random case'] - -# Lowercase -MISSION -Take the input message and convert it to lowercase. - -INSTRUCTIONS -- Complete the task in your mission. -- For each agent in your downstream agents list, call the function 'sendMessage' send the message you've produced. Set the 'recipient' as the name of the downstream agent. -- Downstream agents: ['Random case'] - -# Random case -MISSION -Take the input message and convert it to random case. - -INSTRUCTIONS -- Complete the task in your mission. -- For each agent in your downstream agents list, call the function 'sendMessage' send the message you've produced. Set the 'recipient' as the name of the downstream agent. -- Downstream agents: [] \ No newline at end of file diff --git a/agents/manual_assistants/definitions/text-manipulation/random_case_instructions.md b/agents/manual_assistants/definitions/text-manipulation/random_case_instructions.md new file mode 100644 index 0000000..34b9a32 --- /dev/null +++ b/agents/manual_assistants/definitions/text-manipulation/random_case_instructions.md @@ -0,0 +1,9 @@ +# MISSION + +Take the input message and convert it to random case. + +# INSTRUCTIONS + + * Complete the task in your mission. 
+ * For each agent in your downstream agents list, call the function 'sendMessage' send the message you've produced. Set the 'recipient' as the name of the downstream agent. + * Downstream agents: {{ talksTo }} diff --git a/agents/manual_assistants/definitions/text-manipulation/uppercase_instructions.md b/agents/manual_assistants/definitions/text-manipulation/uppercase_instructions.md new file mode 100644 index 0000000..4ae0ddc --- /dev/null +++ b/agents/manual_assistants/definitions/text-manipulation/uppercase_instructions.md @@ -0,0 +1,9 @@ +# MISSION + +Take the input message and convert it to uppercase. + +# INSTRUCTIONS + + * Complete the task in your mission. + * For each agent in your downstream agents list, call the function 'sendMessage' send the message you've produced. Set the 'recipient' as the name of the downstream agent. + * Downstream agents: {{ talksTo }} diff --git a/agents/manual_assistants/requirements.txt b/agents/manual_assistants/requirements.txt index 0c9d0a5..a5fae35 100644 --- a/agents/manual_assistants/requirements.txt +++ b/agents/manual_assistants/requirements.txt @@ -1,4 +1,5 @@ docutils>=0.20.1 +Jinja2 openai PyYAML requests diff --git a/agents/manual_assistants/run.py b/agents/manual_assistants/run.py index 29e1794..2a08518 100644 --- a/agents/manual_assistants/run.py +++ b/agents/manual_assistants/run.py @@ -12,6 +12,7 @@ from agent import Agent from agentProcessor import AgentProcessor from function_manager import FunctionManager +from template_manager import TemplateManager from OAIWrapper import OAIWrapper import agentEnvHandler @@ -36,7 +37,8 @@ # Construct the absolute path to the agents.yaml file workDir = pathlib.Path(__file__).parent.resolve() -agentsYAML = os.path.join(workDir, args.agentsDefinitionFolder, "agents.yaml") +agentsDefinitionDir = os.path.join(workDir, args.agentsDefinitionFolder) +agentsYAML = os.path.join(agentsDefinitionDir, "agents.yaml") # Check if the provided file path exists if not os.path.isfile(agentsYAML): @@ -50,7 +52,7 @@ ctx = Context(client, agents) # LOAD ENV IDs -agentsIdsFile = os.path.join(workDir, args.agentsDefinitionFolder, "agentsIds.env") +agentsIdsFile = os.path.join(agentsDefinitionDir, "agentsIds.env") # Ensure the file exists by opening it in append mode, then immediately close it with open(agentsIdsFile, 'a'): pass @@ -68,10 +70,12 @@ function_manager = FunctionManager() function_manager.load_functions() +template_manager = TemplateManager([agentsDefinitionDir]) +template_manager.load_templates() # Create/update assistants. for agent in agents: - oai_wrapper = OAIWrapper(client, agent, function_manager) + oai_wrapper = OAIWrapper(client, agent, function_manager, template_manager) if not hasattr(agent, 'id'): # It's a new agent oai_wrapper.createAssistant() agentEnvHandler.saveId(agentsIdsFile, agent) diff --git a/agents/manual_assistants/template_manager.py b/agents/manual_assistants/template_manager.py new file mode 100644 index 0000000..b01789a --- /dev/null +++ b/agents/manual_assistants/template_manager.py @@ -0,0 +1,85 @@ +import copy + +from jinja2 import Environment, FileSystemLoader, TemplateNotFound + +from logger import Logger, AgentLogger + + +class TemplateManager: + """ + Manage templates. + """ + + def __init__(self, template_dirs=None): + """ + Initializes the class with the given template directories. + + :param template_dirs: The list of directories to search for templates. 
+ :type template_dirs: list, optional + """ + self.log = Logger(self.__class__.__name__) + self.template_dirs = template_dirs or [] + self.templates = [] + self.templates_env = None + + def load_templates(self): + """ + Load templates from directories. + + :return: None + """ + self.log.debug("Loading templates from dirs: %s" % ", ".join(self.template_dirs)) + self.templates_env = Environment(loader=FileSystemLoader(self.template_dirs)) + self.templates = self.templates_env.list_templates() or [] + + def get_template(self, template_name): + """ + Fetches a template. + + :param template_name: The name of the template to fetch + :type template_name: str + :return: The fetched template, or None if the template is not found + :rtype: Template or None + """ + try: + template = self.templates_env.get_template(template_name) + except TemplateNotFound: + return False, None, f"Template not found: {template_name}" + return True, template, f"Retrieved template: {template_name}" + + def render_template(self, template_name, variables=None): + """ + Render a template with variable substitutions. + + :param agent: The associated agent. + :type agent: object + :param template_name: The name of the template to render + :type template_name: str + :return: A tuple containing a success flag, the rendered message or template name, and a user message + :rtype: tuple + """ + variables = variables or {} + success, template, user_message = self.get_template(template_name) + if not success: + return success, template_name, user_message + try: + message = template.render(**variables) + user_message = f"Rendered template: {template_name}" + self.log.debug(user_message) + return True, message, user_message + except Exception as e: + user_message = f"Error rendering template: {e}" + self.log.error(user_message) + return False, None, user_message + + def render_agent_template(self, agent, variables=None): + agent_log = AgentLogger(agent.name, agent) + final_variables = vars(agent) + final_variables.update(variables or {}) + agent_log.debug(f"Rendering template for agent {agent.name}", extra={'variables': final_variables}) + try: + return self.render_template(agent.instructions, final_variables) + except Exception as e: + message = f"Error rendering template for agent {agent.name}: {e}" + agent_log.error(message) + return False, None, message From 140a9dc95fe7e68b0da59c601ca022a4deb6527f Mon Sep 17 00:00:00 2001 From: Chad Phillips Date: Thu, 30 Nov 2023 21:18:17 -0500 Subject: [PATCH 140/141] clean up dead files --- .../definitions/boss-worker3/prompts.md | 50 ------------------- .../definitions/text-manipulation/prompts.md | 1 - 2 files changed, 51 deletions(-) delete mode 100644 agents/manual_assistants/definitions/boss-worker3/prompts.md delete mode 100644 agents/manual_assistants/definitions/text-manipulation/prompts.md diff --git a/agents/manual_assistants/definitions/boss-worker3/prompts.md b/agents/manual_assistants/definitions/boss-worker3/prompts.md deleted file mode 100644 index 4fdb77d..0000000 --- a/agents/manual_assistants/definitions/boss-worker3/prompts.md +++ /dev/null @@ -1,50 +0,0 @@ -# Boss -MISSION -- You are a boss agent in charge of three worker agents. -- You'll be given a project to work on. Think step by step about how to tackle it. -- Split it in reasonable tasks and send them to a worker agent one at a time. Wait for the answer to the first worker task before sending another task. 
-- Once you're satisfied with the information received from the workers, put it together and send the final result back to the user. - -INSTRUCTIONS -- Complete the task in your mission. -- To assign a task to the workers call the function 'assignTask'. At the beginning of the message identify yourself. -- Agents: ["USER", "Worker 1", "Worker 2", "Worker 3"] - -# Worker 1 -MISSION -You are "Worker 1", one of three identical worker agents under a boss agent. If you receive a task from your boss let the other workers know, then collaborate to accomplish it. Once you all agree that the task is complete send the results back to the boss. - -INSTRUCTIONS -- Complete the task in your mission. -- To talk to other worker agents call the function 'broadcast'. At the beginning of the message identify yourself. -- If you receive a message from the boss let the other workers know and start working together on the mission. Make sure to pass the task id provided by the boss. -- If you receive a message from other workers don't reply back unless necessary. Keep the worker channel as free from noise as possible. Share results in the channel to advance the mission, but do not send acknowledgements. -- Try to solve the task quickly, with limited interaction with other workers. -- To send the task results back to the boss call the function 'resolveTask'. Pass the id recieved from the boss when the task was assigned. -- Channels: [{'name': 'Worker', 'agents': ['Worker 1', 'Worker 2', 'Worker 3']}] - -# Worker 2 -MISSION -You are "Worker 2", one of three identical worker agents under a boss agent. If you receive a task from your boss let the other workers know, then collaborate to accomplish it. Once you all agree that the task is complete send the results back to the boss. - -INSTRUCTIONS -- Complete the task in your mission. -- To talk to other worker agents call the function 'broadcast'. At the beginning of the message identify yourself. -- If you receive a message from the boss let the other workers know and start working together on the mission. -- If you receive a message from other workers don't reply back unless necessary. Keep the worker channel as free from noise as possible. Share results in the channel to advance the mission, but do not send acknowledgements. -- Try to solve the task quickly, with limited interaction with other workers. -- To send the task results back to the boss call the function 'resolveTask'. -- Channels: [{'name': 'Worker', 'agents': ['Worker 1', 'Worker 2', 'Worker 3']}] - -# Worker 3 -MISSION -You are "Worker 3", one of three identical worker agents under a boss agent. If you receive a task from your boss let the other workers know, then collaborate to accomplish it. Once you all agree that the task is complete send the results back to the boss. - -INSTRUCTIONS -- Complete the task in your mission. -- To talk to other worker agents call the function 'broadcast'. At the beginning of the message identify yourself. -- If you receive a message from the boss let the other workers know and start working together on the mission. -- If you receive a message from other workers don't reply back unless necessary. Keep the worker channel as free from noise as possible. Share results in the channel to advance the mission, but do not send acknowledgements. -- Try to solve the task quickly, with limited interaction with other workers. -- To send the task results back to the boss call the function 'resolveTask'. 
-- Channels: [{'name': 'Worker', 'agents': ['Worker 1', 'Worker 2', 'Worker 3']}] \ No newline at end of file diff --git a/agents/manual_assistants/definitions/text-manipulation/prompts.md b/agents/manual_assistants/definitions/text-manipulation/prompts.md deleted file mode 100644 index 8b13789..0000000 --- a/agents/manual_assistants/definitions/text-manipulation/prompts.md +++ /dev/null @@ -1 +0,0 @@ - From 1b24ba7fb1735f72baf68fb4809a41c7413219c8 Mon Sep 17 00:00:00 2001 From: Chad Phillips Date: Thu, 30 Nov 2023 21:18:27 -0500 Subject: [PATCH 141/141] add to README --- agents/manual_assistants/definitions/boss-worker3/README.md | 3 +++ 1 file changed, 3 insertions(+) diff --git a/agents/manual_assistants/definitions/boss-worker3/README.md b/agents/manual_assistants/definitions/boss-worker3/README.md index e69de29..3ca5bc6 100644 --- a/agents/manual_assistants/definitions/boss-worker3/README.md +++ b/agents/manual_assistants/definitions/boss-worker3/README.md @@ -0,0 +1,3 @@ +# Boss/worker + +An attempt to get three different workers with identical configuration to collaborate on a task.
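The "identical configuration" comes from YAML anchors and aliases in `agents.yaml` (`&workerTools`, `*workerTools`, and so on), which let the worker entries share a single tools/channels definition. A small PyYAML sketch of how those aliases expand at load time, using a simplified stand-in for the real file:

```python
# Simplified stand-in for definitions/boss-worker3/agents.yaml showing how
# YAML anchors/aliases give every worker the same tools and channels.
import yaml

snippet = """
- name: Worker 1
  tools: &workerTools [resolve_task, broadcast]
  channels: &workerChannels [Worker]
- name: Worker 2
  tools: *workerTools
  channels: *workerChannels
"""

agents = yaml.safe_load(snippet)
print(agents[1]["tools"])     # ['resolve_task', 'broadcast']
print(agents[1]["channels"])  # ['Worker']
```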