Learn about conditions and settings in the Fastly UI, and how you can customize the way caching works.
Get started with custom settings and VCL with Fastly.
When a request reaches Fastly, it arrives at a POP and is then sent to one of its nodes. Each node has a master list of all the domains declared in service configurations, which has been pushed out across the network.
There can be only one service configuration per domain. E.g.: www.yourSite.com can have only one service configuration.
Subdomains and wildcards
Each subdomain should have its own service configuration, but it's still possible to apply the same service config to all subdomains by registering a wildcard, e.g. *.api.yoursite.com.
Hosts
Hosts, also referred to as origins, can be added either as a domain name or as an IP address.
Hosts with TLS and without
If you want to accept both HTTPS and non-HTTPS traffic, you need to create separate hosts.
Shielding
You can enable shielding from the configuration page. In my case the PAR (Paris) POP is defined as a shield.
Load Balancing
A load balancer is used to distribute work evenly among several backends. You can enable this option here.
Linking Logic
Here we have our subroutines and their types:
RECV : Request
FETCH : Cache
DELIVER : Response
Between RECV and FETCH
This is where, if we don't have the object in cache, we make the request to the backend. After we get the object back, FETCH runs.
Between FETCH and DELIVER
This is where we decide whether or not to cache the object we received from the backend; then DELIVER runs.
What happens within RECV
In RECV we can decide to switch from one backend to another if we want. Our logic is based on conditions, and within a condition we can use different building blocks such as settings, content, and data.
Conditions within the subroutines
With the different subroutines, I can write logic to:
- Decide where to send the request, based on request conditions in RECV.
- Decide whether to cache and fetch from the cache, based on cache conditions in FETCH.
- Decide to alter the response, based on response conditions in DELIVER.
Conditions
How to create a condition in the UI
To set up a condition in the Fastly UI, follow these steps:
- Go to your service configuration and clone it.
- Next, navigate to the left sidebar and click on "Conditions."
- Then, click on "Create condition."
After you create a condition, you have to create a setting.
How to create a setting in the UI
To create settings in the Fastly UI:
Settings
Request setting
Go to "Settings."
Scroll down until you find "Request Settings" or "Cache Settings."
In "Request Settings," you determine how to handle the request without modifying it.
In "Cache Settings," you decide whether to cache, pass, or perform other actions.
In my request settings, I've simply set it to "PASS," which means the request won't be cached.
Cache setting
Example: how to exclude API calls from the cache
In this video, I'll demonstrate how to set up a request condition to match all requests containing /api (e.g. req.url ~ "/api"), and then apply a cache setting to prevent caching those requests.
Note that it can take a couple of minutes for the changes to take effect on your app.
Header setting: How to set or rewrite headers?
Response setting: A custom 404
Another helpful synthetic response could be to set up a redirect. It’s worth noting that you can customize the response status as well.
Takeaways
• Setting types are linked to certain VCL Subroutines
– Request -> Recv
– Cache -> Fetch
– Response -> Deliver
• Using the UI, you can build configuration settings using:
– Conditions
– Settings
– Content (settings)
– Data (structures)
• Data can be referenced in the logic of your Conditions.
• Snippets are code blocks that can be included in any subroutine.
• Snippets can be Regular (Versioned) or Dynamic (Versionless).
• Custom VCL allows for behaviors that require code blocks in multiple VCL states.
This article zooms into the steps of dealing with requests using Varnish, shown in a diagram. We’ll mainly look at three important parts: RECV, FETCH, and DELIVER.
RECV, FETCH, and DELIVER.
RECV:
We refer to it as a request type, used when we receive the request.
In this subroutine we have access to the following variables (to me they are more like objects):
client.*
server.*
req.*
FETCH:
We refer to it as a cache type, used when we cache the response.
client.*
server.*
req.*
bereq.*
beresp.*
beresp.* stands for "backend response".
DELIVER:
We refer to it as a response type, used when we modify the response before sending it to the end user.
client.*
server.*
req.*
resp.*
resp.* is the response we send back to users.
Examples of what we can access:
client.ip
client.port
client.requests
server.datacenter
server.ip
server.port
req.url
req.host
req.http.*
VCL methods and subroutines
RECV runs after the Delivery Node gets a request.
FETCH runs after we get a backend response & before we cache the object.
DELIVER runs before we send the response to the end user.
Takeaways
• Using the UI allows you to add logic and configurations into the RECV, FETCH, & DELIVER subroutines.
• RECV runs after the Delivery Node gets a request.
• FETCH runs after we get a backend response & before we cache the object.
• DELIVER runs before we send the response to the end user.
• Logic and configs in the UI fall into three types: Request – Cache – Response
• Each of the above three types has different variables available to reference.
Without shielding
To understand shielding, let’s look at this diagram. My main server is in Roubaix, France. My aim is to lighten the load on this server as much as possible.
Requests come from all over the world, handled by the nearest Fastly POPs. However, these POPs still need to reach my server to refresh the cache, creating inefficiency as every POP accesses my origin.
To improve this, we can place a cache server between the POPs and my origin. This server, acting as a shield, absorbs requests and delivers cached content, reducing the direct load on my origin server and improving efficiency.
With Shielding
The Paris POP acts as an additional caching layer. Since it’s closest to our origin, it will handle all the load from other POPs when they need to access my origin server.
Multiple shields
You can set up multiple shields if you have multiple origins. Just like before, keep your shields close to your origins for better efficiency.
Skipping the shield with PASS on the request
If "PASS" is called on the request, it skips the shields and goes directly to the origin.
Shielding adds another caching layer
Since the shield is essentially an extra POP, you gain benefits from POP features like clustering, which adds another layer of caching.
How to visualize your shield?
Run the following command:
curl -I -H "Fastly-Debug: 1" https://www.antoinebrossault.com
Then observe the fastly-debug-path header:
For this example, I ran the command from a German IP.
fastly-debug-path:
(D cache-fra-etou8220158-FRA 1740688295) (F cache-fra-etou8220129-FRA 1740688295)
(D cache-par-lfpb1150066-PAR 1740688295) (F cache-par-lfpb1150066-PAR 1740637340)
Here the edge POP (the first one to process our request) is cache-fra-etou8220158 and the shield is cache-par-lfpb1150066-PAR.
Note: fra stands for Frankfurt and par stands for Paris.
We can also see the details of the machines used for the delivery (D) node and the fetching (F) node.
Edge pop (Frankfurt):
(D cache-fra-etou8220158-FRA 1740688295) (F cache-fra-etou8220129-FRA 1740688295)
And our shield (Paris):
(D cache-par-lfpb1150066-PAR 1740688295) (F cache-par-lfpb1150066-PAR 1740637340)
Takeaways
• The free Shielding feature allows a single POP to act as an additional caching layer
between the user and the origin.
• Each backend configuration can have a different POP selected as its Shield POP.
• This allows different Shields to be configured, one per backend.
The clustering nodes
In a Point of Presence (PoP), there are several nodes that work independently. Each node has its own hash table to keep track of cached objects. When a user sends a request, if the node handling the request doesn’t have the requested object in its hash table, the request must go to the origin.
However, this process is inefficient because the chances of the node responsible for responding having the requested object in its cache are not high.
How can we make sure the origin server doesn’t get overloaded?
What we can do instead is designate a single node responsible for making requests to the origin.
Designate a single node responsible for the object
In this example, there are three users asking for a file. Each user’s request is handled by a separate node. If the requested file isn’t in the cache of the node handling the request, that node will go to another designated node (the chosen one) to fetch the data from the origin.
Reminder on the hash keys and hash table
The parameters used to generate the hash key are:
- The host, e.g. fastly.antoinebrossault.com
- The URL, e.g. /hello
- A generation ID, e.g. 1
Hash for fastly.antoinebrossault.com/hello = 21357f4e1d9a-13a7bc88b63d-1
Recap of the caching behavior regarding the hash table:
- A request with a difference in either the Host or the URL will generate a different Hash Key.
- A cacheable response object from a request will be cached with that Hash Key on the Hash Table.
- When another request comes in and generates the same Hash Key, we can check for the matching Hash Key on the Hash Table and serve the cached response object.
- No key match on the table? It's not cached, it's a MISS: fetch it from the backend.
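To make the hash-key mechanics concrete, here is a minimal JavaScript sketch. The hashing itself is illustrative (Fastly's real algorithm is internal); the point is that host + URL + generation ID produce the key, and the key either matches the hash table (HIT) or doesn't (MISS):

const crypto = require('crypto');

// Illustrative 12-char digest, standing in for Fastly's internal hashing
const sha = (value) => crypto.createHash('sha256').update(value).digest('hex').slice(0, 12);

// Hash key = host + URL + generation ID
const hashKey = (host, url, generation = 1) => `${sha(host)}-${sha(url)}-${generation}`;

const hashTable = new Map();

const lookup = (host, url, generation = 1) => {
  const key = hashKey(host, url, generation);
  // A matching key on the table means the object is cached: HIT
  if (hashTable.has(key)) return { status: 'HIT', object: hashTable.get(key) };
  // No match: MISS, fetch from the backend and cache under this key
  return { status: 'MISS', key };
};

hashTable.set(hashKey('fastly.antoinebrossault.com', '/hello'), '<cached response>');
console.log(lookup('fastly.antoinebrossault.com', '/hello').status); // HIT
console.log(lookup('fastly.antoinebrossault.com', '/other').status); // MISS (different URL)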
Anatomy of a Point of Presence:
In a point of presence, there are many nodes. A load balancer in front of these nodes decides which one will handle each incoming request.
Delivering and Fetching node logic in a POP
Here’s a simplified overview of the logic in a POP:
- 1) Request hits the load balancer.
- 2) Load balancer selects a delivering node.
- 3) In the delivering node:
- Check if the request’s hash is in the hashtable.
- If yes, return the object from the node cache.
- If not, proceed to the fetching node.
- 4) In the fetching node:
- Check if the request’s hash is in the hashtable.
- If not, go to the origin.
- 5) After fetching from the origin:
- Populate the cache of the fetching node.
- Send the object back to the delivering node.
- The delivering node caches the object and updates its hashtable.
- 6) Finally, the delivering node sends back the data to the user.
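As a sketch of that two-tier lookup, here is a hypothetical JavaScript version, with in-memory Maps standing in for the delivering and fetching nodes' caches (fetchFromOrigin is any async function you supply):

const deliverCache = new Map(); // delivering node's cache
const fetchCache = new Map();   // fetching node's cache

async function handleRequest(key, fetchFromOrigin) {
  if (deliverCache.has(key)) return deliverCache.get(key); // HIT on the delivering node
  if (fetchCache.has(key)) {                               // HIT on the fetching node
    const obj = fetchCache.get(key);
    deliverCache.set(key, obj);                            // the delivering node caches it too
    return obj;
  }
  const obj = await fetchFromOrigin(key);                  // MISS everywhere: go to the origin
  fetchCache.set(key, obj);                                // populate the fetching node's cache
  deliverCache.set(key, obj);                              // then the delivering node's cache
  return obj;                                              // finally, send the data back to the user
}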
Schema:
Here's a schema of the logic in a POP. To be honest I wanted to make it simpler, and I ended up with a schema that isn't 100% clean, so sorry about this…
In this schema the black arrows go forward, while the blue dashed lines represent the request going back.
When Delivering and Fetching node behaviors are disabled
PASS & Restart disable clustering
PASS:
When a request has a "PASS" instruction, caching is unnecessary, so we turn off the clustering behavior.
Restart:
Restart is a way to handle errors from the origin. In this case, we don’t cache the origin response; instead, we return it directly to the user.
A secondary fetching node as a fallback
The usual behavior is as I described earlier, which is shown in this diagram.
But what if the fetching node gets too busy or stops responding? In that case, the delivery node can choose a backup secondary node automatically. This logic is built into the algorithm.
Takeaways
• Clustering is a set of Node behaviors within a POP, that improves caching performance and reduces request load to the origin.
• Nodes can fulfil both the Deliver Behavior and the Fetch Behavior.
• The request hash key is used via algorithm to determine the Primary Node for that request. The Primary then acts as the Fetch Node.
• The Primary, when acting as the Delivery Node, will use the Secondary Node as its Fetch Node.
• When the Primary Node is down, the Secondary Node acts as the Fetch Node.
• Different Hash Key = Different Fetch Node
What's request collapsing?
The concept of request collapsing in the context of a cache system refers to the scenario where multiple identical requests for the same data are received by the cache system simultaneously or within a short period of time. Instead of processing each request separately, the cache system can collapse or combine these identical requests into a single request, thus reducing the load on the backend system and improving overall efficiency.
The goal is to guarantee that only one request for an object will go to origin, instead of many. With Fastly, Request Collapsing is enabled by default for GET/HEAD requests. It is disabled otherwise.
Request collapsing for a cacheable object
In request collapsing, when many requests come in at once, the system chooses one main request (called the « champion request ») to get the data from the original source and store it in the cache. Other requests are then paused until the cache is updated with the data. This helps manage the workload efficiently.
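Here is a minimal JavaScript sketch of that champion/waiter mechanism, assuming a map of in-flight promises keyed by hash key; concurrent callers for the same key share a single origin fetch:

const inFlight = new Map();

function collapsedFetch(key, fetchFromOrigin) {
  // A champion request is already fetching this key: wait on its promise
  if (inFlight.has(key)) return inFlight.get(key);

  // Otherwise this request becomes the champion and goes to the origin
  const promise = fetchFromOrigin(key).finally(() => inFlight.delete(key));
  inFlight.set(key, promise);
  return promise;
}

// Three simultaneous calls to collapsedFetch('hash-abc', fetchFromOrigin)
// trigger one origin fetch; the two others simply await the champion's promise.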
PASS Called on the request
You can change how the cache system works by turning off request collapsing for certain types of requests. When this happens, all requests are sent directly to the original source instead of being combined.
PASS Called on the response
If you choose to bypass the cache when responding (using a header or setting a cookie, for example), the waiting requests are reactivated after the first response. However, this method isn’t as efficient as bypassing the cache when requesting data because we have to wait for the response to receive the final instructions.
Hash key for PASS on the response
When you pass on a response, the hash key for that request is marked with a "PASS" value for the next 120 seconds. This means that for the next 120 seconds, all requests with the same hash key will also bypass the cache.
Response object is not cacheable but no PASS
If neither a « PASS » is called on the request nor on the response, it’s possible to receive an uncacheable response from the origin.
For example, if your server crashes and sends back a 500 error. In this case, the champion request gets a 500 response, and then the next request is triggered. Hopefully, the second one receives a response from the origin that isn’t a 500 error.
Takeaways
- For cacheable objects, Request Collapsing collapses many backend requests into one.
- PASSing on the Request disables Request Collapsing.
- A non-cacheable response will sequentially awaken requests to the backend.
- A PASSed Response awakens all pending requests and sends them to the backend.
- A PASSed Response triggers a PASS marker on its hash key for the next 120 seconds.
Why purge?
If you want to make sure everyone sees the latest version of something on a website, like a new headline, you might need to « purge » or clear out the old version from the cache. It’s like refreshing a page to see the newest updates. So, if you change the headline on a webpage and you want everyone to see the change immediately, you’ll need to purge the cached version. That way, when people visit the page again, they’ll see the updated headline right away.
Different ways to purge
With Fastly, you’ve got a couple of ways to purge content from the cache.
Single URL purge
One option is to purge a single URL. In simple terms, you just point to the URL you want to refresh, and it gets cleared out from the cache. The cool part? It happens super quick! (about 150 milliseconds)
Surrogate Key Purge
Another way to purge is by using something called a Surrogate Key Purge.
This method relies on surrogate keys, which are like tags attached to articles or media files. Using this technique, you can clear out all the URLs associated with a specific tag, category, or the URLs affected by recent changes you’ve made live.
Another neat thing about this approach is that you can purge multiple URLs in one go by purging using multiple surrogate keys.
Purge All
Another option is to purge everything, which means removing all cached items under a service configuration. However, this approach comes with a risk. If you get too many requests to your origin server all at once, it could overload your server, almost like a self-inflicted DDoS attack.
How the Hash key is used to purge-all
The parameters used to generate the hash key are:
- The host, e.g. fastly.antoinebrossault.com → 21357f4e1d9a
- The URL, e.g. /hello → 13a7bc88b63d
- A generation ID, e.g. 1
Hash for fastly.antoinebrossault.com/hello = 21357f4e1d9a-13a7bc88b63d-1
When you execute a purge-all command, the generation number is incremented by one. As a result, for the subsequent request, the hash will be different, causing a cache miss.
For example, the previous hash could be 21357f4e1d9a-13a7bc88b63d-0, and after the purge, the hash becomes 21357f4e1d9a-13a7bc88b63d-1.
The crucial point to grasp is that a purge-all command doesn’t actually remove the object from the cache. Instead, it alters the hash associated with the object, effectively making it unreachable.
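A short JavaScript sketch of that generation bump (the hashing is illustrative, not Fastly's actual algorithm):

const crypto = require('crypto');
const sha = (v) => crypto.createHash('sha256').update(v).digest('hex').slice(0, 12);

let generation = 0;
const hashKey = (host, url) => `${sha(host)}-${sha(url)}-${generation}`;

console.log(hashKey('fastly.antoinebrossault.com', '/hello')); // ends in -0

// purge-all: nothing is deleted, the generation is simply incremented,
// so every subsequent request hashes to a brand-new key and MISSes
generation += 1;
console.log(hashKey('fastly.antoinebrossault.com', '/hello')); // ends in -1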
Soft Purge
Another way to purge content is by using a soft-purge method, where instead of rendering the object unreachable by altering the hash, you mark the object as stale. This means the object can still be accessed, even though it has been invalidated.
But when might you use this? Well, consider content where serving the absolute latest version isn’t crucial, such as a logo. In this case, the stale object can still be served for a specified period that you determine.
The available options for this method are:
- Stale-While-Revalidate
- Stale-if-error
Stale-While-Revalidate
Cache-Control: max-age=3600, stale-while-revalidate=3600
When a user visits the website to view a product page, the site fetches and caches the product details to ensure fast loading times for subsequent visitors. However, product information, such as pricing, availability, or reviews, may change frequently due to updates from suppliers or customer feedback.
In this scenario, the website implements the Stale-While-Revalidate strategy. When a user requests a product page, the cached version is served immediately, even if it’s slightly outdated (stale). At the same time, the server initiates a revalidation process to fetch the latest product information from the database or external sources.
Stale-if-error
Cache-Control: max-age=3600, stale-if-error=3600
When a user visits the news website to read an article, the server fetches and caches the HTML content of the article to ensure quick loading times for subsequent visitors. However, occasional server issues or network disruptions may result in temporary errors when attempting to fetch the latest version of an article.
In this scenario, the news website implements the Stale-if-error strategy. When a user requests to read an article, the cached version of the HTML page is served immediately. If the server encounters an error while trying to fetch the latest version of the article (e.g., due to server overload or database connection issues), the website continues to serve the cached version of the article HTML instead of displaying an error message to the user.
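If your origin is a Node.js app, here is a hypothetical Express route combining both directives (Express, the route, and the payload are assumptions made for the example):

const express = require('express');
const app = express();

app.get('/product/:id', (req, res) => {
  // Fresh for an hour; then serve stale for up to an hour while revalidating,
  // and keep serving stale for an hour if the origin errors out
  res.set('Cache-Control', 'max-age=3600, stale-while-revalidate=3600, stale-if-error=3600');
  res.json({ id: req.params.id, price: 19.99 });
});

app.listen(3000);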
How to purge on Fastly?
To perform a purge, you have several options available. You can use the user interface (UI), where you can either purge a single URL, purge by surrogate key, or initiate a purge-all operation.
Purge from the UI
Purge from the API
To purge content via the API, you can use the following call:
curl -X PURGE "https://fastly.antoinebrossault.com/"
As demonstrated in this video, the initial joke is served from the cache. By default, on the origin server, each HTTP call refreshes the joke. Here, I’m manually triggering a cache refresh using the API.
This call typically doesn’t require protection, but if desired, you can activate an option that requires an API key:
curl -X POST <url> -H "Fastly-Key:<Fastly APIKey>"
In the second call, replace <url> with the appropriate endpoint and <Fastly APIKey> with your Fastly API key.
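The same single-URL purge can be scripted with axios; this is a sketch, assuming the token lives in a FASTLY_KEY environment variable (the header is only needed if you enabled authenticated purging):

const axios = require('axios');

async function purgeUrl(url) {
  const response = await axios.request({
    method: 'PURGE', // axios passes custom HTTP methods through to Node's http module
    url,
    headers: { 'Fastly-Key': process.env.FASTLY_KEY },
  });
  console.log('Purge response:', response.data);
}

purgeUrl('https://fastly.antoinebrossault.com/');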
Purge all
Here's the code I use in my example app to run a purge-all for a given service:
const axios = require('axios');
require('dotenv').config()
const axiosConfig = {
headers: {
'Fastly-Key': process.env.FASTLYAPI,
'Accept': 'application/json'
}
};
exports.purgeAll = async (serviceID) => {
try {
const response = await axios.post(
`https://api.fastly.com/service/${serviceID}/purge_all`,
{},
axiosConfig
);
console.log('Purge request sent successfully:', response.data);
return response.data; // Return the response data if needed
} catch (error) {
console.error('Error purging all:', error.response.data);
throw error; // Re-throw the error if needed
}
}
Purge by Surrogate keys
curl -X POST -H "Fastly-Key: <Fastly API Key>" \
  "https://api.fastly.com/service/<service_id>/purge" \
  -H 'Surrogate-Key: key_1 key_2 key_3 key_4'
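And a JavaScript sketch of the same call, modeled on the purge-all helper above (the FASTLYAPI secret name is reused from that example):

const axios = require('axios');
require('dotenv').config();

exports.purgeByKeys = async (serviceID, keys) => {
  const response = await axios.post(
    `https://api.fastly.com/service/${serviceID}/purge`,
    {},
    {
      headers: {
        'Fastly-Key': process.env.FASTLYAPI,
        'Surrogate-Key': keys.join(' '), // e.g. 'key_1 key_2 key_3'
        'Accept': 'application/json'
      }
    }
  );
  return response.data;
};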
Options to authorize purge
Conditionally allow or block purge calls by:
- Requiring Fastly Auth (key)
- Allowlisting or blocklisting IPs
- Blocking via geolocation
Limits on Purging
Purge by Service
Purges are limited by Service, so any content on different services will require separate purges.
Purge-All is Rate Limited
Authenticated users' API calls are rate-limited to 1,000 per hour.
Tokens
Account Tokens can limit the scope of purge privileges to:
- Account
- Service
- Purge type
What's caching?
In a web application, caching works like a high-speed memory that stores frequently accessed data or content. When a user requests information from the application, instead of fetching it from the original source every time, the application checks if it’s already stored in the cache. If it is, the data is retrieved quickly from the cache, significantly reducing the time needed to load the page or fulfill the request. This enhances the application’s speed and responsiveness, ultimately providing a smoother user experience.
Cache Terminology Explained
Understanding key terms like HIT, MISS, and PASS is essential to grasp how caching systems operate:
HIT:
When a requested object is found in the cache, it results in a HIT. In simpler terms, this means the cache already has the data stored and can swiftly deliver it to the user, bypassing the need to retrieve it from the original source.
MISS:
Conversely, a MISS occurs when the requested object is not present in the cache. In this scenario, the caching system must retrieve the object from the origin server, leading to slightly longer response times. Objects that consistently result in MISSes may be deemed uncacheable due to their dynamic or infrequent nature.
PASS:
In certain cases, Fastly (or any caching system) may opt to bypass caching altogether for specific objects. When an object is marked for PASS, Fastly will always fetch it directly from the origin server, without attempting to store it in the cache. This ensures that the freshest version of the object is always delivered to the user, albeit at the cost of caching benefits.
Cache Metrics: Understanding Cache Hit Ratio and Cache Coverage
Cache Hit Ratio
Cache Hit Ratio: It’s the proportion of requests successfully served from the cache compared to the total requests, calculated as the number of cache hits divided by the sum of cache hits and misses. A higher ratio indicates more efficient cache usage, while a lower ratio suggests room for improvement in caching effectiveness.
HIT / (HIT + MISS)
Cache Coverage
Cache Coverage: This metric assesses the extent of the cache’s utilization by considering the total number of requests that either resulted in a cache hit or miss, divided by the sum of all requests, including those passed through without caching. In simpler terms, it measures how much of the overall workload is handled by the cache, indicating its effectiveness in caching content. A higher cache coverage implies a larger portion of requests being managed by the cache, thus maximizing its impact on performance optimization.
(HIT + MISS) / ( HIT + MISS + PASS )
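Both formulas translate directly into code; a tiny sketch, assuming you already have hit/miss/pass counters from your stats:

const cacheHitRatio = ({ hit, miss }) => hit / (hit + miss);
const cacheCoverage = ({ hit, miss, pass }) => (hit + miss) / (hit + miss + pass);

console.log(cacheHitRatio({ hit: 900, miss: 100 }));            // 0.9
console.log(cacheCoverage({ hit: 900, miss: 100, pass: 250 })); // 0.8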
How to check if a response came from the cache?
To find out if a request hit or missed the cache, just check the response headers. For instance, in my example, if the x-cache header shows HIT, it means the response was fetched from the cache.
Manage the cache duration
The cache duration is governed by the Time-To-Live (TTL).
The TTL is not itself a header, but you can compute it from the caching headers in the response.
The Age header keeps increasing (you can see it in the gif). When the Age header’s value becomes greater than the TTL, it means the cache expires and gets invalidated.
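Here is a minimal JavaScript sketch of that computation, deriving the TTL from Cache-Control and comparing it to Age (header parsing simplified for the example):

function remainingTtl(headers) {
  const cc = headers['cache-control'] || '';
  const sMaxage = cc.match(/s-maxage=(\d+)/);
  const maxAge = cc.match(/max-age=(\d+)/);
  // s-maxage takes precedence over max-age for shared caches like Fastly
  const ttl = sMaxage ? Number(sMaxage[1]) : maxAge ? Number(maxAge[1]) : 0;
  const age = Number(headers['age'] || 0);
  return ttl - age; // 0 or less: the cached object has expired
}

console.log(remainingTtl({ 'cache-control': 'max-age=3600', age: '3500' })); // 100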
How to keep track of the objects stored in the cache?
How does a CDN like Fastly keep track of what’s in the cache? Well, on each node (and there are multiple nodes on a POP – Point of Presence), there’s a hash table that does the job.
This hash table generates a key for every request, using the host and URL. Whenever a response is cacheable, it’s added to this table. So, if you’re looking for something in the cache and it’s not in this table, it means it’s not in the cache.
So the host and URL are hashed like this:
{
"5678ab57f5aaad6c57ea5f0f0c49149976ed8d364108459f7b9c5fb6de3853d6" : "somewhere/in/the/cache.html"
}
How to use headers to control the cache:
You’ve got various headers to control caching in your response, determining whether to cache it and for how long.
Cache-Control:
This header is respected by all caches, including browsers. Here are some Cache-Control values you can use:
To explain those headers clearly, imagine your web server is a store and your visitors are customers. You can give instructions on how fresh their groceries need to be. Here's how to do that with headers:
Cache-Control: max-age= (seconds)
This is like saying "keep these groceries fresh for X seconds." Browsers will use the stored version for that long before checking for a new one.
Cache-Control: s-maxage= (seconds)
This is for special big warehouses (shared caches) that store groceries for many stores. It tells those warehouses how long to keep things fresh, overriding the regular max-age for them.
Cache-Control: private
This is like saying "these groceries are for this customer only, don't share them with others." The customer's browser can store them, but shared caches must not.
Cache-Control: no-cache
This is like saying "you can keep these groceries, but always check with me before serving them." Caches may store the item but must revalidate it with the server before using it.
Cache-Control: no-store
This is stricter than no-cache. It's like saying "don't even keep a shopping list for these groceries, always get new ones." Browsers won't store anything related to this item.
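To see those directives in context, here is a hypothetical Express origin setting a different policy per route (the routes and bodies are made up for the example):

const express = require('express');
const app = express();

// Shared caches (like Fastly) may keep this for a day, browsers for a minute
app.get('/article', (req, res) => res.set('Cache-Control', 'max-age=60, s-maxage=86400').send('article'));

// Personal data: the browser may cache it, shared caches must not
app.get('/account', (req, res) => res.set('Cache-Control', 'private').send('account'));

// Sensitive flow: never store anything
app.get('/checkout', (req, res) => res.set('Cache-Control', 'no-store').send('checkout'));

app.listen(3000);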
Surrogate-Control:
Another header you can use to control the cache is the Surrogate-Control.
This header is respected by caching servers but not browsers.
The values you can set for Surrogate-Control include:
max-age= (seconds)
The caching server will use the stored version for that long before checking for a new one.
Expires
Expires is a header that any cache, including browsers, respects. It's only utilized if neither Cache-Control nor Surrogate-Control headers are present in the response. An example of the Expires header would be:
Expires: Wed, 21 Oct 2026 07:28:00 GMT
This indicates that the cached response will expire on the specified date and time.
How to not cache (PASS)?
When you don’t want to cache something, you can use the Pass behavior. You can choose to pass on either the request or the response.
If it’s a Pass, then the response won’t be stored in the hash table we talked about earlier.
Default request that will PASS
By default, any request that isn’t a GET or HEAD method will pass.
The HEAD method is similar to the GET method in that it requests the headers of a resource from the server, but it doesn't request the actual content of the resource. It's often used to check if a resource exists. Here's an example with axios:
const axios = require('axios');

// Request only the headers of a resource, not its body
async function makeHeadRequest() {
  const response = await axios.head('https://example.com');
  console.log('Headers:', response.headers);
}

makeHeadRequest().catch(console.error);
With Fastly, you have the option to pass on either the request or the response. Passing on the request is faster because you immediately declare that this request should PASS, rather than needing to check with the caching system on the backend to make that determination.
When you choose to pass on the request, it means the response won’t be stored in the hashtable because it’s marked as non-cacheable.
Default response that will PASS
By default, all responses are cached.
Responses will not be cached if your origin server adds a Cache-Control: private header or a Set-Cookie header to the response.
A PASS response is stored in the hash table as PASS
When a response is set to pass, it means that it won’t be stored in the cache like other responses. However, Fastly still keeps track of these responses by adding them to the hash table with a special « PASS » value. This allows Fastly to quickly identify and handle such responses without going through the usual caching mechanisms.
// https://fastly.antoinebrossault.com/pass
{
"978032bc707b66394c02b71d0fa70918c2906634c0d26d22c8f19e7754285054" : "PASS"
}
Conclusion
- Fastly is another caching layer between the client and the content at origin.
- Each request generates a hash key.
- If the hash key matches with an object in the hash table, then it’s a HIT and will be served from cache.
- Some status codes will NOT cache by default.
- Cache-control headers determine the TTL and if an object will not cache.
- Using « Pass » will cause a response to NOT cache.
- You can choose to pass on the request and/or the response.
In a typical project, your web app is served from a single location, often the one you selected during project registration with your cloud provider, be it London, Paris, or the West Coast of the USA.
However, if your app has users worldwide, serving it from just one location is suboptimal. Despite advancements in fast networks like 5G, data cannot travel faster than the speed of light. Thus, having points closer to your users to serve your data will always be advantageous!
Distribute your app logic around the world
Developers have long been familiar with the concept of getting closer to users by utilizing Content Delivery Networks (CDNs) to efficiently serve static files like images, JavaScript, and CSS…
However, distributing app logic poses a different challenge, as it typically remains hosted where the backend of the application resides. Consequently, for users located farther from this backend, the user experience may suffer from increased latency.
This is where edge computing emerges as a potential game changer. With edge computing, developers can execute crucial application logic at the edge of the network. Thus, beyond merely serving static files, critical aspects of application logic can be accessed and executed much faster, significantly enhancing the overall user experience.
How does edge computing work?
To deploy your app logic at the edge, you utilize serverless functions, which are deployed and hosted at multiple points across the globe, ready to execute when needed.
If you’ve ever worked with serverless functions, you’re likely familiar with the cold start challenge. One of the key benefits of serverless functions is that you only pay for the time they’re running. However, this also means that when a function isn’t actively running, it’s « cold » and needs to be reactivated when triggered, leading to longer initial response times for users.
Given that the primary goal of edge computing is to deliver rapid response times, dealing with cold starts becomes a critical concern. To address this issue, edge platforms execute code in lightweight runtimes rather than heavyweight servers. Some platforms rely on JavaScript engines such as Chrome's V8, which can run JavaScript outside the browser; high-performance edge computing services like Fastly rely on WebAssembly to achieve low-latency execution. WebAssembly code is closer to machine code and requires less interpretation, leading to faster execution.
Limitation
However, there’s a limitation: the code you deploy must be compiled to be under 1MB in size to ensure optimal performance. This constraint means you can’t simply install all npm dependencies without consideration. Often, you’ll need to bundle dependencies using tools like Webpack to meet this requirement.
Fortunately, new JavaScript frameworks like Nuxt or Next are emerging, offering the capability to run parts of the logic at the edge. This trend is making more and more applications compatible with edge computing, paving the way for improved performance and responsiveness across diverse use cases.
How performant is edge computing?
In this section of the article, we’ll examine the contrast between hosting a Node.js application in France using OVH and deploying the same application on the Fastly Edge Compute platform.
The code for the application is relatively straightforward; it generates a random quote from professional golfers.
Let’s analyze the speed of both applications. For testing, I utilized WebPageTest and conducted 8 consecutive tests from an EC2 instance located in Osaka, Japan, without any traffic shaping applied.
The metric chosen to evaluate the performance of the applications is the Time To First Byte (TTFB). TTFB measures the duration it takes for the server to send the first byte of the response, providing insight into how quickly the application logic is executed.
The Node.js App hosted in France (Gravelines, North of France) (OVH)
With a median TTFB of 698ms in this test, it’s unsurprising given the geographical distance between Paris and Osaka, which is approximately 9,200 kilometers. The latency introduced by this distance naturally contributes to the observed TTFB.
My code, hosted in France, didn’t perform well in Japan. Let’s explore its response time across various locations worldwide.
To conduct the tests, I utilized speedvitals.com, which enables testing the TTFB across different regions worldwide. It’s expected to observe a deterioration in performance as we move farther away from Europe.
Europe
Average Time To First Byte : 107ms
Americas
Average Time To First Byte : 473ms
Asia
Average Time To First Byte : 859ms
As evident in Sydney, it took over a second to receive the response. From a user perspective, this delay is exceedingly slow and undoubtedly will have a detrimental impact on site revenues.
Performance map
Unsurprisingly, as we moved farther from Europe, the performance continued to degrade.
The Node.js App hosted at the Edge with Fastly Compute
Now, let's examine how the same code performs at the Edge. I deployed it using Fastly Compute to distribute it worldwide. Let's review the obtained metrics. The application deployed at the Edge has a median TTFB of 144ms, which is roughly 4.8 times quicker than the one hosted in France when accessed from Osaka, Japan.
Now, let’s examine the response times across the globe for the application deployed on Fastly Compute.
Europe
In Europe, Fastly Compute is 7% faster than OVH (99ms vs 107ms).
Americas
In the Americas, Fastly Compute is 84.55% faster (73ms vs 473ms).
Asia
In Asia, Fastly Compute is 82.22% faster (153ms vs 859ms).
Performance map
As evident from the map, the performance of the application deployed on Fastly Compute is exceptional across the globe.
Conclusion
To sum up, it’s clear that where you host your web app matters a lot, especially if you have users all around the world. Serving your app from just one place can make it slow for people who are far away. But with edge computing, things can change for the better.
Edge computing lets us run important parts of our app closer to where users are. This means faster loading times and happier users, no matter where they are. Looking at our comparison between hosting in France with OVH and using Fastly Edge Compute, it’s obvious that edge computing makes a big difference. The time it takes for the app to respond is much quicker, especially for users in far-off places like Asia.
So, in simple terms, edge computing is like a superpower for web developers. It helps us make our apps faster and more reliable for everyone, no matter where they are in the world. And as more developers start using it, we can expect even more improvements in how our apps perform.
If you want to swiftly deploy a Node.js app on Linux with Apache2 as a proxy, here's a basic Bash script that I've found to be very efficient. It automates creating an Apache2 configuration, restarting the server, and getting a fresh SSL certificate, saving you time.
You can find the Bash script below if you want to speed up the process.
#!/bin/bash
# Check if the script is run as root
if [ "$EUID" -ne 0 ]; then
echo "Please run this script as root."
exit 1
fi
# Prompt for the domain name and directory
read -p "Enter the domain name for your website (e.g., example.com): " domain
read -p "Enter the path to the Node.js application directory (e.g., /var/www/nodeapp): " app_dir
read -p "Enter the Node.js application port (e.g., 3000): " app_port
# Create the Node.js application directory
mkdir -p $app_dir
chown -R www-data:www-data $app_dir
chmod -R 755 $app_dir
# Create a new Apache configuration file
config_file="/etc/apache2/sites-available/$domain.conf"
touch $config_file
# Define the Apache VirtualHost configuration with reverse proxy
cat > $config_file <<EOL
<VirtualHost *:80>
ServerAdmin webmaster@$domain
ServerName $domain
# ProxyPass for Node.js application
ProxyPass / http://127.0.0.1:$app_port/
ProxyPassReverse / http://127.0.0.1:$app_port/
DocumentRoot $app_dir
ErrorLog \${APACHE_LOG_DIR}/${domain}_error.log
CustomLog \${APACHE_LOG_DIR}/${domain}_access.log combined
<Directory $app_dir>
Options Indexes FollowSymLinks
AllowOverride All
Require all granted
</Directory>
</VirtualHost>
EOL
# Enable the required Apache proxy modules and the site configuration
a2enmod proxy proxy_http
a2ensite $domain
# Reload Apache to apply the changes
systemctl reload apache2
# Obtain SSL certificate and update Apache configuration with Certbot
certbot --apache --agree-tos --redirect --non-interactive --domain $domain
# Provide instructions to the user
echo "Node.js application configuration for $domain has been created and enabled."
echo "You can now start your Node.js application and configure it to listen on port $app_port."
echo "Don't forget to update your DNS records to point to this server."
Usage
Put your Node.js code in the desired folder, then start the app.
In my case the app is at:
/var/www/html/socket.antoinebrossault.com
Then :
root@somewhere:/home/debian/apacheAutomation# sudo bash newApache2NodeApp.sh
Enter the domain name for your website (e.g., example.com): socket.antoinebrossault.com
Enter the path to the Node.js application directory (e.g., /var/www/nodeapp): /var/www/html/socket.antoinebrossault.com
Enter the Node.js application port (e.g., 3000): 9825
Within the Operations Hub, there exists a powerful feature known as Custom Coded Actions, which are essentially server-side functions executable within workflows.
These code blocks or functions can be written in either JavaScript or Python. However, a significant challenge arises as not everyone possesses proficiency in these programming languages. Fortunately, with the advent of AI technologies like ChatGPT, generating code becomes feasible by providing appropriate prompts.
This article aims to guide you through the process of generating a Custom Coded Action using minimal code intervention. We’ll explore specific prompts that enable the creation of these actions without delving too deep into coding intricacies.
Disclaimer:
It’s crucial to note that while AI can generate code snippets, utilizing them effectively requires a basic understanding of coding principles. Even if an AI can generate the code, comprehending and making necessary adjustments can be challenging without some level of coding knowledge. Therefore, this article will not only showcase the generation process but also underscore the importance of comprehending and refining the code for optimal usage.
The prompt:
I’ve crafted a prompt for you—it’s a fundamental one that can be tailored to your specific requirements. While I’ve conducted some testing and found it to be effective for a basic start, feel free to refine it further based on your needs.
To provide a quick overview, this prompt outlines the constraints associated with a Custom Coded Action. If you're unfamiliar with these constraints, I strongly recommend reading my article titled "How to Write Your First Custom Coded Action" for a comprehensive understanding.
Now, let’s delve into the prompt:
Context: serverless function development
Help needed: I want you to help me write logic in JavaScript
Constraints: the code should be written in this format:
exports.main = async (event, callback) => {
// code here
callback({
outputFields: {
}
});
}
The logic should be written inside the main function.
If the logic returns data the data should be returned by the callback function inside the outputFields object.
When you get external data input as parameters, the data comes from the event.inputFields object. E.g.: if you get the contact email address, then the email variable should be set like:
const email = event.inputFields.email;
If you need to make HTTP request you can use the axios library.
Logic to implement:
Copy the provided prompt into the ChatGPT interface and articulate your logic as you would in a coding environment. Break down your thought process into individual steps, considering each step from a coder’s perspective.
Once you’ve input the prompt, articulate your logic methodically, focusing on clarity and precision in each step. Think of it as a conversation with the AI where you guide it through your logical reasoning, ensuring a comprehensive understanding of the desired outcome.
Remember, effective communication is key, so express your logic in a way that facilitates a smooth and accurate generation of the Custom Coded Action.
Demo : detect if an email is from a free email provider
In this demonstration, I will guide you through the process of creating a Custom Coded Action that determines whether an email address is associated with a free email provider or not.
For example, addresses like antoine@gmail.com are considered free, as they can be created by anyone using services like Gmail, Yahoo, Hotmail, and so forth.
By contrast, an email address such as antoine@hubspot.com would fall into a different category.
Here's what I added to the prompt:
I want you to grab the contact email address and tell me if the domain is a free one
Like: antoine@gmail.com is a free one; antoine@HubSpot.com is not a free one
Let's see what ChatGPT generated:
const isFreeDomain = (email) => {
// List of free email domains
const freeDomains = ['gmail.com', 'yahoo.com', 'outlook.com'];
// Extract the domain from the email address
const [, domain] = email.split('@');
// Check if the domain is in the list of free domains
return freeDomains.includes(domain.toLowerCase());
};
exports.main = async (event, callback) => {
// Get the contact email address from the inputFields
const email = event.inputFields.email;
// Check if the domain is a free one
const isFree = isFreeDomain(email);
// Prepare the outputFields object to be returned by the callback
const outputFields = {
isFreeDomain: isFree,
};
// Call the callback function with the outputFields
callback({
outputFields,
});
};
Now the only thing to make it a bit better would be to ask ChatGPT to add more free domains.
Let's do it:
OK, this is what I have now:
const isFreeDomain = (email) => {
// List of free email domains
const freeDomains = [
'gmail.com', 'yahoo.com', 'outlook.com', 'hotmail.com', 'aol.com', 'protonmail.com',
'icloud.com', 'zoho.com', 'mail.com', 'gmx.com', 'yandex.com', 'mail.ru',
'inbox.com', 'fastmail.com', 'tutanota.com', 'cock.li', 'tuta.io', 'tuta.io',
'runbox.com', 'disroot.org', 'riseup.net', 'startmail.com', 'vfemail.net',
'countermail.com', 'safe-mail.net', 'mailinator.com', 'guerrillamail.com',
'10minutemail.com', 'temp-mail.org', 'anonbox.net', 'mailnesia.com'
];
// Extract the domain from the email address
const [, domain] = email.split('@');
// Check if the domain is in the list of free domains
return freeDomains.includes(domain.toLowerCase());
};
exports.main = async (event, callback) => {
// Get the contact email address from the inputFields
const email = event.inputFields.email;
// Check if the domain is a free one
const isFree = isFreeDomain(email);
// Prepare the outputFields object to be returned by the callback
const outputFields = {
isFreeDomain: isFree,
};
// Call the callback function with the outputFields
callback({
outputFields,
});
};
What is the Docker Default Bridge Network?
When you install Docker, a default bridge network named "bridge" is automatically created. This network serves as the foundation for container networking and offers essential features for developers.
Container Isolation
The default bridge network is designed to isolate containers from the host machine’s network. This isolation ensures that containers do not interfere with each other or with the host system, providing a secure and controlled environment.
Example 1: Creating Containers on the Default Bridge Network
You can create containers attached to the default bridge network using the docker run command. Here's an example of launching two containers and inspecting their network settings:
# Create a container named "webapp1" running an NGINX web server
docker run -d --name webapp1 nginx
# Create another container named "webapp2" running an Apache web server
docker run -d --name webapp2 httpd
# Inspect the bridge network (shows the connected containers and their IPs)
docker network inspect bridge
In this example, both containers are connected to the default bridge network. You can find their internal IP addresses, allowing them to communicate with each other over the network.
Private Internal IP Addresses
Containers attached to the bridge network can communicate with each other over private, internal IP addresses. Docker assigns each container a unique IP address within the subnet of the bridge network. This IP address allows containers to interact with each other using standard networking protocols.
Example 2: Container Communication Using Container Names
Name-based DNS resolution is only provided on user-defined bridge networks; the default bridge does not resolve container names automatically. Let's see how you can use a user-defined network to enable one container to communicate with another:
# Create a user-defined bridge network (the default bridge lacks name-based DNS)
docker network create appnet

# Create a container named "webapp3" running a simple web server
docker run -d --name webapp3 --network appnet -p 8080:80 nginx

# Create a container named "client" on the same network to send requests to "webapp3"
docker run -it --name client --network appnet alpine sh

# Inside the "client" container, use the container name to access "webapp3"
wget http://webapp3

# On the default bridge, you would have to use the internal IP address instead
wget http://172.17.0.2
In this example, the "client" container communicates with the "webapp3" container by using its container name, thanks to the DNS resolution built into user-defined networks. On the default bridge, you would have to use the container's IP address instead.
External Connectivity via NAT
Although containers on the default bridge network are isolated, they can access the external world through Network Address Translation (NAT). Docker sets up NAT rules, allowing containers to initiate outbound connections.
Example 3: External Connectivity via NAT
Containers on the default bridge network can access external resources. Here’s an example of how a container can access external websites:
# Create a container that accesses an external website
docker run -it --name external-access alpine sh
# Inside the "external-access" container, try accessing a website
wget https://www.google.com
The "external-access" container can access external websites as Docker sets up NAT rules for outbound connections.
Port Mapping
To allow external access to a container service, you can use port mapping.
Example 4: Port Mapping
Here’s an example of exposing a containerized web service to the host’s network:
# Create a container named "webapp4" running an NGINX web server and map port 8080 to 80
docker run -d --name webapp4 -p 8080:80 nginx
Now, you can access the NGINX web server from your host machine at http://localhost:8080
In conclusion, the Docker Default Bridge Network is a foundational element for container networking. Developers can leverage it to build isolated, interconnected containerized applications and gain a deep understanding of networking concepts that are vital when working with Docker.
Concept
The concept is pretty simple: in a Workflow, you send the person's email address to the PeopleDataLabs API. If there's a match in their records, we update the contact's information.
See how it works and implement it
Set the custom code block
PeopleDataLabs API key
You need to get an API key on the PeopleDataLabs website.
The API key has to be set in the secrets section of the Custom Coded Action, under the name peopleDataLabsAPI.
Set the email variable
Set the variable name email in the "properties to include in code" field.
Your setup should look like this:
The code ready to implement
Here's the JavaScript code you can copy-paste into the custom code block.
const axios = require('axios');
exports.main = async (event, callback) => {
if (!process.env.peopleDataLabsAPI) throw new Error('The peopleDataLabs API key has to be set in the secret section');
if (process.env.peopleDataLabsAPI.trim() === '') throw new Error(`The peopleDataLabs API key can't be empty`);
const email = event.inputFields.email;
if (!email) throw new Error('email is not set, are you sure you put email in the "properties to include in code" ? ');
const personInfos = await getPersonInfos(email).catch(axiosErrorHandler)
if (!personInfos.data) throw new Error(`We couldn't grab your email infos`);
if (personInfos.data.status === 404) throw new Error(`The API query worked but we didn't find any match`);
if (personInfos.data.status !== 200) throw new Error(`The API query worked but didn't return a 200 status code instead we got ${personInfos.data.status}`);
if (personInfos.data.total < 1) throw new Error(`The API query worked but no result has been returned`);
if (!personInfos.data.data) throw new Error(`The API query worked but there's no data`);
if (personInfos.data.data.length === 0) throw new Error(`The API query worked but no result has been returned, the data array is empty`);
const {
full_name,
first_name,
middle_initial,
middle_name,
last_initial,
last_name,
gender,
birth_year,
birth_date,
linkedin_url,
linkedin_username,
linkedin_id,
facebook_url,
facebook_username,
facebook_id,
twitter_url,
twitter_username,
github_url,
github_username,
work_email,
personal_emails,
recommended_personal_email,
mobile_phone,
industry,
job_title,
job_title_role,
job_title_sub_role,
job_title_levels,
job_onet_code,
job_onet_major_group,
job_onet_minor_group,
job_onet_broad_occupation,
job_onet_specific_occupation,
job_onet_specific_occupation_detail,
job_company_id,
job_company_name,
job_company_website,
job_company_size,
job_company_founded,
job_company_industry,
job_company_linkedin_url,
job_company_linkedin_id,
job_company_facebook_url,
job_company_twitter_url,
job_company_location_name,
job_company_location_locality,
job_company_location_metro,
job_company_location_region,
job_company_location_geo,
job_company_location_street_address,
job_company_location_address_line_2,
job_company_location_postal_code,
job_company_location_country,
job_company_location_continent,
job_last_updated,
job_start_date,
location_name,
location_locality,
location_metro,
location_region,
location_country,
location_continent,
location_street_address,
location_address_line_2,
location_postal_code,
location_geo,
location_last_updated,
phone_numbers,
emails,
interests,
skills,
location_names,
regions,
countries,
street_addresses,
experience,
education,
profiles,
version_status
} = personInfos.data.data[0];
const personal_email = Array.isArray(personal_emails) ? personal_emails[0] : null
const phone_number = Array.isArray(phone_numbers) ? phone_numbers[0] : null
callback({
outputFields: {
full_name,
first_name,
middle_initial,
middle_name,
last_initial,
last_name,
gender,
birth_year,
birth_date,
linkedin_url,
linkedin_username,
linkedin_id,
facebook_url,
facebook_username,
facebook_id,
twitter_url,
twitter_username,
github_url,
github_username,
work_email,
personal_email,
recommended_personal_email,
mobile_phone,
industry,
job_title,
job_title_role,
job_title_sub_role,
job_title_levels,
job_onet_code,
job_onet_major_group,
job_onet_minor_group,
job_onet_broad_occupation,
job_onet_specific_occupation,
job_onet_specific_occupation_detail,
job_company_id,
job_company_name,
job_company_website,
job_company_size,
job_company_founded,
job_company_industry,
job_company_linkedin_url,
job_company_linkedin_id,
job_company_facebook_url,
job_company_twitter_url,
job_company_location_name,
job_company_location_locality,
job_company_location_metro,
job_company_location_region,
job_company_location_geo,
job_company_location_street_address,
job_company_location_address_line_2,
job_company_location_postal_code,
job_company_location_country,
job_company_location_continent,
job_last_updated,
job_start_date,
location_name,
location_locality,
location_metro,
location_region,
location_country,
location_continent,
location_street_address,
location_address_line_2,
location_postal_code,
location_geo,
location_last_updated,
phone_number
}
});
}
/**
* Checks if a given email address is valid.
*
* @param {string} email - The email address to be validated.
* @returns {boolean} True if the email is valid, false otherwise.
*
* @example
* const isEmailValid = isValidEmail("user@example.com");
* // Returns true
*/
const isValidEmail = (email) => /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(email);
/**
* Gets the personal information of a person from People Data Labs, given their email address.
*
* @param email The email address of the person to get information for.
* @throws {Error} If the email parameter is not a valid string or is empty.
* @returns {Promise<axios.Response>} A promise that resolves to an axios response object containing the person's information.
*/
const getPersonInfos = async (email) => {
if (typeof email !== 'string' || email.trim() === '') throw new Error('Invalid email parameter. It must be a non-empty string.');
if (!isValidEmail(email)) throw new Error('Not a valid email passed as a parameter of the getPersonInfos() function ');
const endpoint = 'https://api.peopledatalabs.com/v5/person/search';
const params = {
"dataset": "email",
"size": 1,
"sql": `SELECT * FROM person WHERE (emails.address = '${email}' )`,
pretty: true,
};
const config = {
headers: {
'X-Api-Key': process.env.peopleDataLabsAPI,
},
params,
};
return axios.get(endpoint, config)
}
/**
* Handles errors thrown by axios requests and logs relevant information.
*
* @param {Error} error - The error object thrown by axios.
*/
const axiosErrorHandler = error => {
if (error.response) {
// The request was made and the server responded with a status code
// that falls out of the range of 2xx
console.log(error.response.data);
console.log(error.response.status);
console.log(error.response.headers);
} else if (error.request) {
// The request was made but no response was received
// `error.request` is an instance of XMLHttpRequest in the browser
// and an instance of http.ClientRequest in node.js
console.log(error.request);
} else {
// Something happened in setting up the request that triggered an Error
console.log('Error', error.message);
}
}
Output
The output should look like this:
Create a data flow showing where the data comes from and goes
I definitely recommend creating a data flow sketch or schema, which is vital for maintaining a clear perspective on how data enters and exits your systems. It serves as a visual roadmap, offering a comprehensive understanding of the data's journey. This clarity enables efficient error detection, ensures security and compliance, streamlines processes, and facilitates smooth change management when systems or processes evolve.
From HubSpot to your app
To achieve a connection between HubSpot and your app, we will use Workflows, which serve as the automation powerhouse within the CRM. They allow you to set up a sequence of actions triggered by specific events or conditions. In essence, this proactive solution enables you to orchestrate actions based on predefined triggers, creating a seamless and efficient system of automated responses.
Create data in your app from HubSpot
In the depicted schema, when a contact is added to HubSpot, it initiates a Workflow. Within this Workflow, a webhook block or a custom code block is executed. These blocks facilitate an API call to our application’s API, allowing for the seamless insertion of the contact’s information into our own system. This automated process ensures that data synchronization between HubSpot and our application is efficient and accurate.
Demo
1 – Create a WorkFlow
The WorkFlow you have to create should be based on the object you want to create in your app. E.g : if you want to send a HubSpot contact to your app, create a WorkFlow based on contacts.
2 – Create a filter
You may need to perform this action only for some of your contacts which match specific criteria. If that’s the case, then set the filters accordingly.
3 – Choose between a Custom Code block and a webhook
Webhook :
You can choose a webhook block to achieve this connection :
- If you have to execute a POST request (a webhook block can run POST or GET requests only).
- If you don’t have to match a specific payload, then you can use a webhook. The payload HubSpot will send to your endpoint will look like this (for a contact). With a webhook, you will have to parse the request on the app side, as shown in the sketch below.
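As a rough illustration, here’s a minimal sketch of an endpoint that receives and parses such a webhook on the app side. It assumes a Fastify server and a hypothetical /api/hubspot-webhook route; the exact payload shape depends on the object HubSpot sends :
const fastify = require('fastify')({ logger: true });
// Hypothetical endpoint receiving the HubSpot workflow webhook.
// HubSpot POSTs the enrolled object's data as JSON, so we parse
// the fields we need out of the request body ourselves.
fastify.post('/api/hubspot-webhook', async (request, reply) => {
    const payload = request.body;
    // The exact shape depends on the object; for a contact, properties
    // are commonly nested, so we check both shapes here.
    const email = payload?.properties?.email?.value || payload?.email;
    if (!email) return reply.code(400).send({ error: 'No email found in payload' });
    // ...insert the contact into our own database here...
    return { received: true };
});
fastify.listen({ port: 3000 }).catch(err => {
    fastify.log.error(err);
    process.exit(1);
});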
Custom Code :
If you have to match a specific payload (aligned with the data your API expects), then a custom code block is a better solution.
In this example, my API requires a POST request to this endpoint :
POST https://partner-app.antoinebrossault.com/api/user
With this JSON Body :
{
"name": "John",
"lastname": "Doe",
"email": "johndoe@example.com",
"tokensAvailable": 100,
"carManufacturer": "Toyota",
"carModel": "Camry"
}
And with an authorization header set :
authorization: `Bearer <myAuthorizationToken>`
My code looks like this :
const axios = require('axios');
const axiosConfig = {
headers: {
authorization: `Bearer ${process.env.myAPIsecret}`
}
};
exports.main = async (event, callback) => {
const tokensAvailable = parseInt(event.inputFields['tokensAvailable']);
const carManufacturer = event.inputFields['carManufacturer'];
const carModel = event.inputFields['carModel'];
const name = event.inputFields['name'];
const lastname = event.inputFields['lastname'];
const email = event.inputFields['email'];
const dataToSend = {
name,
lastname,
email,
tokensAvailable,
carManufacturer,
carModel
};
let userToCreate = null;
try {
    // POST the contact data to our app's API
    userToCreate = await axios.post(`https://partner-app.antoinebrossault.com/api/user`, dataToSend, axiosConfig);
    if (!userToCreate.data) throw new Error(`We failed to create the user for ${email}... 😬`);
} catch (error) {
    console.log(`error ${error}`);
}
// Optional chaining guards against a failed request leaving userToCreate null
callback({
    outputFields: {
        changes: userToCreate?.data?.changes,
        lastID: userToCreate?.data?.lastID
    }
});
}
Read data in your app from HubSpot
To read data in your app from HubSpot, you can create a WorkFlow, and in that WorkFlow run an API call to get the data. That API call can be done with a webhook block or a Custom Code block.
Demo
1 – Create a WorkFlow
The WorkFlow you have to create should be based on the object you want to enrich.
2 – Create a filter
You may need to perform this action only for some of your contacts which match specific criteria. If that’s the case, then set the filters accordingly.
3 – Choose between a Custom Code block and a webhook
Webhook :
If your API endpoint requires a GET or a POST request and takes its parameters as query parameters, then you can use a webhook.
https://partner-app.antoinebrossault.com/api/user/?email=carey85@gmail.com
✅ This endpoint can be used in a webhook, as the email is passed as a query parameter : email=carey85@gmail.com
https://partner-app.antoinebrossault.com/api/user/carey85@gmail.com
❌ By contrast, this endpoint can’t be used in a webhook block, because the email carey85@gmail.com is passed in the URL path, not as a query parameter.
If your endpoint is not compatible with a webhook, don’t worry, just use a Custom Code block.
Custom code :
The pro of a Custom Code block is its flexibility : there’s no API a Custom Code block can’t call.
Here’s a Custom Code which calls the same endpoint used above.
// Import the Axios library for making HTTP requests
const axios = require('axios');
exports.main = async (event, callback) => {
// Extract the 'email' field from the 'event' parameter
const email = event.inputFields.email;
// Use Axios to make an asynchronous HTTP GET request to retrieve contact information
const contactInfos = await axios.get(`https://partner-app.antoinebrossault.com/api/user/${email}`);
// Check if the 'contactInfos' response data is empty, and if so, throw an error
if (!contactInfos.data) throw new Error(`We failed to get infos for ${email}... 😬`);
// Log the retrieved 'contactInfos' data to the console
console.log(contactInfos.data)
// Call the 'callback' function to return the result of the API call to the WorkFlow
callback({
outputFields: {
// Map specific properties from 'contactInfos' data to output fields
"tokensAvailable": contactInfos.data.tokensAvailable,
"carManufacturer": contactInfos.data.carManufacturer,
"carModel": contactInfos.data.carModel,
"avatar": contactInfos.data.avatar
}
});
}
This code doesn’t use an endpoint with query parameters, as we call this endpoint :
const contactInfos = await axios.get(`https://partner-app.antoinebrossault.com/api/user/${email}`);
The endpoint in my code contains the variable ${email}, so when the call is executed the URL is :
https://partner-app.antoinebrossault.com/api/user/carey85@gmail.com
assuming carey85@gmail.com is the email address of the contact enrolled in the Workflow.
Update data in your app from HubSpot
If you want to update data in your app in reaction to an event in HubSpot, there’s a good chance you will need to perform a PATCH request. To do so, your only option is to use a Custom Code block inside a Workflow, since webhook blocks only support POST and GET.
In my own application I need to perform a PATCH request like so :
PATCH https://partner-app.antoinebrossault.com/api/user
With this JSON Body :
{
"name": "John",
"lastname": "Doe",
"email": "johndoe@example.com",
"tokensAvailable": 100,
"carManufacturer": "Toyota",
"carModel": "Camry"
}
And with an authorization header set :
authorization: `Bearer <myAuthorizationToken>`
Demo
1 – Create a WorkFlow
The WorkFlow you have to create should be based on the object you want to update, and re-enrollment should be activated in the WorkFlow.
2 – Create a filter
You may need to perform this action only for some of your contacts which match specific criteria. If that’s the case, then set the filters accordingly.
3 – Add a custom code block
// Import the Axios library for making HTTP requests
const axios = require('axios');
// Configure the Axios request headers with an authorization token
const axiosConfig = {
headers: {
authorization: `Bearer ${process.env.myAPIsecret}`
}
};
// Export an asynchronous function named 'main' that takes 'event' and 'callback' parameters
exports.main = async (event, callback) => {
// Extract and parse specific input fields from the 'event' parameter
const tokensAvailable = parseInt(event.inputFields['tokensAvailable']);
const carManufacturer = event.inputFields['carManufacturer'];
const carModel = event.inputFields['carModel'];
const name = event.inputFields['name'];
const lastname = event.inputFields['lastname'];
const email = event.inputFields['email'];
// Create an object 'dataToSend' with the extracted input fields and add a 'fromHs' property
const dataToSend = {
name,
lastname,
email,
tokensAvailable,
carManufacturer,
carModel,
fromHs: true
};
// Log a message indicating the intention to update user data and the content of 'dataToSend'
console.log(`Let's update ${email} with ${JSON.stringify(dataToSend)}`);
// Perform an HTTP PATCH request to update the user data
const res = await axios.patch('https://partner-app.antoinebrossault.com/api/user', dataToSend, axiosConfig).catch(axiosErrorHandler);
// Check if the request failed or returned no data, and if so, throw an error
// (axiosErrorHandler swallows the rejection, so 'res' can be undefined)
if (!res || !res.data) throw new Error(`We failed to update infos for ${email}... 😬`);
// Call the 'callback' function with an object containing output fields
callback({
outputFields: {
changes: res.data.changes
}
});
}
/**
* Handles errors thrown by axios requests and logs relevant information.
*
* @param {Error} error - The error object thrown by axios.
*/
const axiosErrorHandler = error => {
if (error.response) {
// The request was made and the server responded with a status code
// that falls out of the range of 2xx. Log response data, status, and headers.
console.log(error.response.data);
console.log(error.response.status);
console.log(error.response.headers);
} else if (error.request) {
// The request was made but no response was received. Log the request object.
console.log(error.request);
} else {
// Something happened in setting up the request that triggered an Error. Log the error message.
console.log('Error', error.message);
}
}
From your app to HubSpot
The first step in this process is defining the event that triggers the data transfer to HubSpot. For example, you might want to initiate this process when a contact is created in your application. This event could also be a user registration, a purchase, or any other activity that’s meaningful to your business.
Once the event is detected in your application, you need to collect the relevant data about the contact. This typically includes their name, email address, company information, and any other data that is important to you.
After collecting the necessary data, it’s time to send it to HubSpot. This is done by making an API call to HubSpot’s API. You’ll need to use the HubSpot API endpoint for creating or updating contacts. This typically involves sending a POST or a PATCH request to a specific URL with the contact data in a structured format (JSON).
Create data in HubSpot from your APP
In order to synchronize your app with HubSpot when data is created, you need to call the HubSpot API and send the data to HubSpot.
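For illustration, here’s a minimal sketch of such a call using the CRM v3 objects API; the token name hubspotToken and the property mapping are assumptions, not part of the original setup :
const axios = require('axios');
// Create a contact in HubSpot through the CRM v3 objects API.
// Assumes a private app token stored in process.env.hubspotToken.
const createHubSpotContact = async (contact) => {
    const endpoint = 'https://api.hubapi.com/crm/v3/objects/contacts';
    const response = await axios.post(endpoint, {
        properties: {
            email: contact.email,
            firstname: contact.name,
            lastname: contact.lastname
        }
    }, {
        headers: { authorization: `Bearer ${process.env.hubspotToken}` }
    });
    return response.data; // includes the id of the created contact
};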
Demo
Update data in HubSpot from your APP
To run an update it’s the exact same concept, but instead of calling the API endpoint to create, we use the endpoint to update.
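Under the same assumptions, the update counterpart is a PATCH on the contact’s id :
// Update an existing HubSpot contact by id via a PATCH request
const updateHubSpotContact = async (contactId, properties) => {
    const endpoint = `https://api.hubapi.com/crm/v3/objects/contacts/${contactId}`;
    const response = await axios.patch(endpoint, { properties }, {
        headers: { authorization: `Bearer ${process.env.hubspotToken}` }
    });
    return response.data;
};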
Demo
Use the new HubSpot WorkFlow triggers
There’s a new feature in HubSpot Workflows: « Trigger workflows from webhooks (Operations Hub Professional and Enterprise only). »
With this feature, you can initiate a workflow by making a call to a URL (webhook).
To activate the workflow, you need to execute a POST request to that URL and can include data in the body in JSON format.
This data can then be utilized within the workflow to perform various tasks, such as creating a record, updating an existing record, and more.
Code example used in the demo :
const axios = require('axios');
(async () => {
const endpoint = "https://api-na1.hubapi.com/automation/v4/webhook-triggers/21169044/TXcgqlT"
await axios.post(endpoint,{
hello: true,
orderNum : 32322,
clientEmail : "antoinebrossault@gmail.com",
items: [
{
name: 'Wheel',
price : 34,
qty : 2
},
{
name: 'Engine',
price : 8000,
qty : 1
}
]
})
})();
Demo
In the upcoming video, we’ll start by explaining the fundamental concept behind the Visitor Identification API. Then, we’ll walk you through the code that brings this concept to life. Whether you’re an experienced developer or just getting started, this video will provide valuable insights into leveraging the full potential of HubSpot’s Visitor Identification API.
The visitor identification API doc
Schema
Prerequisites
Before you get started, ensure that you meet the following prerequisites:
- You must have an existing web application with an authentication system.
- Set up a private app in HubSpot.
- Your HubSpot account should have a qualifying Professional or Enterprise subscription.
Integration Steps
1. User Authentication
Begin by allowing users to log in with your current login form on your website. Ensure that your authentication system is in place.
As an example, the auth form could look like this :
This HTML form with the id « login » contains an email input field and a password field. When users click the « Login » button, the form will be submitted.
<form id="login">
<input type="email" id="email" >
<input type="password" id="password" >
<button type="submit"> Login </button>
</form>
2. Generate an Identification Token
To generate an identification token, you will need to use the Visitor Identification API. This should be done on the backend of your web application. You can pass in the email address of the authenticated visitor.
Here’s the router which handles the POST request to log in the user :
fastify.post("/login", async function(request, reply) {
const user = {
email: request.body.email,
};
const authResult = await hubSpotAPI.authVisitor(user.email);
console.log(authResult);
reply.send({token:authResult});
});
As you can see I call a function I created :
const authResult = await hubSpotAPI.authVisitor(user.email);
In this function I make a POST request to the HubSpot API on the following endpoint :
curl --request POST \
--url https://api.hubapi.com/conversations/v3/visitor-identification/tokens/create \
--header 'authorization: Bearer YOUR_ACCESS_TOKEN' \
--header 'content-type: application/json' \
--data '{
"email": "visitor-email@example.com",
"firstName": "Gob",
"lastName": "Bluth"
}'
NB : you need to use your private app token to make this call.
The function looks like this :
exports.authVisitor = async (email) => {
    if (!email) throw new Error('you need to set an email');
    // Same endpoint as the curl example above
    const url = 'https://api.hubapi.com/conversations/v3/visitor-identification/tokens/create';
    const postData = {
        email
    };
    // axiosConfig carries the private app token in the authorization header
    const response = await axios.post(url, postData, axiosConfig).catch(axiosErrorHandler);
    if (!response) throw new Error(`API didn't respond...`);
    if (!response.data) throw new Error(`API didn't respond with data ...`);
    if (!response.data.token) throw new Error(`API didn't respond with a token ...`);
    return response.data.token;
}
Once I get the token, I return it to my controller, which passes it to the front-end.
3. Set Properties in hsConversationsSettings
Using the token generated in Step 2, you should set specific properties on the hsConversationsSettings object on the window. These properties include identificationEmail and identificationToken. Here’s an example:
window.hsConversationsSettings = {
identificationEmail: "visitor-email@example.com",
identificationToken: "<TOKEN FROM STEP 2>"
};
My JavaScript code executed for the login and the auth with HubSpot looks like this :
document.querySelector('#login')?.addEventListener("submit", async (event) => {
event.preventDefault();
const emailInput = document.querySelector('#email');
try {
const response = await fetch("/login", {
method: 'POST',
headers: {
'Content-Type': 'application/json',
},
body: JSON.stringify({
email : emailInput.value
})
});
if (!response.ok) throw new Error(`Request failed with status: ${response.status}`);
const data = await response.json();
if(!data.token) throw new Error(`No token received by the backend`);
window.hsConversationsSettings = {
identificationEmail: emailInput.value,
identificationToken: data.token
};
window.HubSpotConversations.widget.load();
window.location.replace("/mytickets/");
} catch (error) {
console.error('An error occurred:', error);
}
});
The important part is :
In this part, I use the token to set the right cookie in the user’s browser. Thanks to this cookie, the user will not have to enter their email address in the forms / chat.
window.hsConversationsSettings = {
identificationEmail: emailInput.value,
identificationToken: data.token
};
window.HubSpotConversations.widget.load();
Quick explanation
In this video, I dive into a game-changing solution for efficient user assignment management in HubSpot. Discover how my custom coded action automates user assignments, ensuring tasks are handled seamlessly, even during peak vacation periods. Say goodbye to the complexity of traditional if/then branches and hello to a more efficient CRM workflow.
Comprehensive explanation of the logic
Join me in this comprehensive video as I deep dive into the intricacies of user assignment management within HubSpot CRM. I explore the challenges CRM managers face with traditional if/then branches and unveil my custom coded action. Learn how this solution not only automates assignments but also enhances scalability, reduces errors, and ensures consistent, fair practices. Say goodbye to manual complexities and hello to a more efficient and effective CRM workflow. Watch to discover the future of user assignment management.
The Challenge of Traditional Workflow Branches
Managing user assignments in HubSpot CRM using traditional if/then branches in a workflow can become a complex and time-consuming endeavor. Here’s why:
- Complexity: With a growing number of users, the complexity of creating and managing these conditional branches increases exponentially. It can become a tangled web of conditions and exceptions, making it challenging to maintain and troubleshoot.
- Resource-Intensive: These traditional workflows require substantial resources in terms of time and human effort. CRM managers need to constantly update and monitor the conditions to ensure that they align with the ever-changing user availability.
- Scalability Issues: As your business grows and more users are added to the CRM system, the traditional if/then branches approach becomes less scalable. It’s not an efficient way to handle assignments in a dynamic environment.
- Risk of Overlooking Assignments: In busy periods or during employee vacations, there’s a higher risk of overlooking assignments or assigning tasks to users who are out of the office. This can lead to delays and customer dissatisfaction.
The Advantages of this Custom Coded Action
Now, let’s discuss why this Operations Hub Pro solution is a game-changer in managing user assignments:
- Automation: The custom solution automates the assignment process, reducing the need for manual intervention. It checks user availability and makes assignments accordingly, even during peak vacation periods.
- Efficiency: The solution is highly efficient, ensuring that every task is assigned promptly and to an available user. It eliminates the time-consuming process of manually verifying each user’s status.
- Scalability: This custom solution is highly scalable. It doesn’t matter if you have a dozen users or hundreds; it can handle the assignment process effectively.
- Error Reduction: With automated checks and backups in place, the risk of overlooking assignments or assigning tasks to unavailable users is drastically reduced. This leads to improved task management and enhanced customer satisfaction.
- Flexibility: The system is adaptable. It can handle different scenarios, such as identifying a backup user when the primary user is out of the office or selecting an available team member if needed.
- Time and Resource Savings: By automating the assignment process, CRM managers save a significant amount of time and effort that can be redirected toward more strategic tasks.
- Consistency: The solution ensures consistent and fair assignment practices. There’s no room for bias or oversight in the process.
In conclusion, this custom code solution offers a superior approach to managing user assignments in HubSpot CRM, particularly when dealing with many users. Its automation, efficiency, scalability, and error reduction capabilities make it a valuable tool in streamlining your CRM workflow. By eliminating the complexities and resource demands associated with traditional if/then branches, it provides a reliable and consistent method for managing user assignments and ultimately improving customer relationship management.
How to implement ?
You will need :
- A private App Token with read and write access to the object you need to manage. The private app also needs read access to teams.
- An access key to the Google spreadsheet API
- Operations Hub Pro
Create a Workflow based on the object you want to re-assign
In this example, I’m going to use a ticket-based WorkFlow. Create a WorkFlow based on tickets with the filter : ticket owner is known.
It should look like this :
Copy the following spreadsheet
( Click on the picture to open the spreadsheet )
Create API access for the Google Sheet API
To be able to connect the spreadsheet with HubSpot, you need Google Sheet API credentials. Here’s how to get a key :
- Go to the Google Spreadsheet API page
- Click on Activate
- Click on Credentials
- Click on Create Credentials
- Click on Service account
- Enter the name you want in Service Account Name
- Click on the service account you just created
- Click Add Key
- Click Create new Key
- Select JSON and click Create ; the key will be downloaded to your computer.
Get the data from the key
To open a JSON file manually and copy the values for the private_key and client_email keys without using a code editor, you can use a text editor or a viewer that allows you to open and read JSON files. Here’s how you can do it manually:
Using a Text Editor (e.g., Notepad on Windows or TextEdit on macOS):
- Locate the JSON file you just downloaded.
- Right-click on the file and choose « Open with » or « Open » (depending on your operating system).
- Select a text editor from the list of available programs (e.g., Notepad on Windows or TextEdit on macOS).
- The JSON file will open in the text editor.
- Manually locate the « private_key » and « client_email » keys in the JSON file, and copy their corresponding values.
Using a Web Browser (e.g., Chrome):
- Open Google Chrome.
- Drag and drop the JSON file into a new tab in your Chrome browser.
- Chrome will display the JSON file in a readable format.
- Manually locate the « private_key » and « client_email » keys in the displayed JSON, and copy their corresponding values.
So you should end up with something like this :
"-----BEGIN PRIVATE KEY-----\nMIIEvQIBADANBgkqhkiG9w0BAQEFAASCBKcwggSjAgEAAoIBAQDiNjl5cU33NjqJ\nb/PHMkE9EGd0HYfBuOe65yis4xJZtIf8tiEuA0kVb5uApE4P5cS9RPli1rr8zOPP\nNWolFqVhaIKeZQRoq2d3m73ea99AHOFXtg/zajAEOSe1MNMIVQc3jCQXEAN/RMua\n1ADgaYLyhLs2fhrxEg42Ld4GUWmSbX3uGMvio9JxEYWtJIuhPJCNpd5bgQd86D9M\nzF2EimVtYnXGon4i+w7YafRX2c7CooAEt00tM/jQgMSU1D2gr30bOfJJMDZ1QwIq\n9O473tIDXy3YTusm/yRSss0o/xm8Va9CXqQ3IBqytSev1ufXaQbo8+TxJ+Ospp4c\nURSyuLm7AgMBAAECggEAYpd2kk8HR5UriyYvjiSBmY8yP8H1HsIrwWKpcDyhjVZp\nJIPuzyKgckbL9BPob/ZZOpK6zNDA+5iDO5bQtex6VQubTlTByKrX9CH9bVj/mu5b\naoKPziv8VILiow5uE8YXWKbsPd79XzVJNihrX2OtLm0aOWRZ8rLHXea8y20lQatE\nP1uhWDgwYPJm9r5sjszALzNlETivrbxOVxpso+hqybj370eJzblLgXadwBOhdx3u\nv3UGzfX9i/qVnY5ywEZUd7G/Tmrx97LC5y2pCYOr7p1wdVt4ukdgsAkG59VaQ3WC\n9y8Pc5Yg3n1Y+8Hp+ih3b/IkMurdiT6l2UGcv8ZkJQKBgQDyLMSIRX2NkBQ02nPO\nVFvV5HnXYH/z4D6Uw32PHb3hkHhM7Dku8BiG+emeto1cVDyfkMxliwUXtCDBn6X8\n5ZRg0Ru7oPj/qPMcBpWD4hqdqf5N+IQ+v7lW+8uE8Eaz8WbpgJBpadEQRKS6NYnF\napVX0cYteqBlYRE3I/Ph5/IdfwKBgQDvICq0CByrWs9CH4tyxU76EiPR2Ggfqs5C\nYlUD3BzjkCmBSc1H2yww/SygnzhrgWiGwdkzmYSrDvBo6eoM0E0sO8tGTwA2WUNZ\nwRTL20ErPsLPucFqGMBZ2YHis0MXW/PGNBO6ZADX67iE8xXOk27tQ5wB+gIlOHgt\n4CE9RAl5xQKBgF3mU9HOt+7i1aLkrRBsjysxKrkC9rnV0g4WeqG6U3yZarvQwB9e\nAvSbBCWA/PC2zMbF+yrIK5JUSnso7tBPKCgeDFXFBacDmDfeqax4R/+oAS20VXqL\nFk8O1IvYKmHtEQ0qx1PILsLTCtgUmDXOrNdfRCswJ+8HIwixTQfjynH7AoGAYJth\nRTylwIC+jRtLbkHSh2s+t2+zmV+bVux9JkMOFM3QRuB3I9mjP+N43SeWVrCAdzjn\ntFYIaEdvzyL5oNWi6AT8Odp+3nYvpJpB+Z4J9Ru0/tEwF9oKFAKw29LKfyxyDxhJ\nBBuUz6b29Bd1LvEXdpnC9HV52mm2++m55BORtHUCgYEAwcXk0P0WqcgEOnuoR7xQ\nEwsjskxqf5Zs/NL2clleJ1qnlrkFqZUKP8J0agTGTjCoR6qPJ15fv9afBDAaCXRw\nr7OIJc0BEJgTeicu6JrXUhk7bsLW9DqtY3GHKHkkZAQgoryBDWD5jeIoiLWBrFVo\nOef8OpRPaC5A/7Me+9prkGs=\n-----END PRIVATE KEY-----\n"
"ooo-hubspot-management@grounded-vista-401120.iam.gserviceaccount.com"
Once you have the key and the email address, create a custom code block in the WorkFlow.
Create a custom code block to store our credentials
In that Custom Coded Action block paste the following code :
const sheetId = "";
const privateKey = "";
const clientEmail = ""
exports.main = async (event, callback) => {
if(!sheetId) throw new Error(`sheetId has to be set !`);
if(sheetId === "") throw new Error(`sheetId can't be empty !`);
if(!privateKey) throw new Error(`privateKey has to be set !`);
if(privateKey === "") throw new Error(`privateKey can't be empty !`);
if(!clientEmail) throw new Error(`clientEmail has to be set !`);
if(clientEmail === "") throw new Error(`clientEmail can't be empty !`);
callback({
outputFields: {
sheetId,
privateKey,
clientEmail
}
});
}
In this code we are going to set our credentials.
Follow the video to see how to create this block
So you should end up with this :
const sheetId = "1FtQiR3k1nYSW2oiVenq0UwdSCRJauf_Q3gT9vkyeH1k";
const privateKey = "-----BEGIN PRIVATE KEY-----\nMIIEvQIBADANBgkqhkiG9w0BAQEFAASCBKcwggSjAgEAAoIBAQDiNjl5cU33NjqJ\nb/PHMkE9EGd0HYfBuOe65yis4xJZtIf8tiEuA0kVb5uApE4P5cS9RPli1rr8zOPP\nNWolFqVhaIKeZQRoq2d3m73ea99AHOFXtg/zajAEOSe1MNMIVQc3jCQXEAN/RMua\n1ADgaYLyhLs2fhrxEg42Ld4GUWmSbX3uGMvio9JxEYWtJIuhPJCNpd5bgQd86D9M\nzF2EimVtYnXGon4i+w7YafRX2c7CooAEt00tM/jQgMSU1D2gr30bOfJJMDZ1QwIq\n9O473tIDXy3YTusm/yRSss0o/xm8Va9CXqQ3IBqytSev1ufXaQbo8+TxJ+Ospp4c\nURSyuLm7AgMBAAECggEAYpd2kk8HR5UriyYvjiSBmY8yP8H1HsIrwWKpcDyhjVZp\nJIPuzyKgckbL9BPob/ZZOpK6zNDA+5iDO5bQtex6VQubTlTByKrX9CH9bVj/mu5b\naoKPziv8VILiow5uE8YXWKbsPd79XzVJNihrX2OtLm0aOWRZ8rLHXea8y20lQatE\nP1uhWDgwYPJm9r5sjszALzNlETivrbxOVxpso+hqybj370eJzblLgXadwBOhdx3u\nv3UGzfX9i/qVnY5ywEZUd7G/Tmrx97LC5y2pCYOr7p1wdVt4ukdgsAkG59VaQ3WC\n9y8Pc5Yg3n1Y+8Hp+ih3b/IkMurdiT6l2UGcv8ZkJQKBgQDyLMSIRX2NkBQ02nPO\nVFvV5HnXYH/z4D6Uw32PHb3hkHhM7Dku8BiG+emeto1cVDyfkMxliwUXtCDBn6X8\n5ZRg0Ru7oPj/qPMcBpWD4hqdqf5N+IQ+v7lW+8uE8Eaz8WbpgJBpadEQRKS6NYnF\napVX0cYteqBlYRE3I/Ph5/IdfwKBgQDvICq0CByrWs9CH4tyxU76EiPR2Ggfqs5C\nYlUD3BzjkCmBSc1H2yww/SygnzhrgWiGwdkzmYSrDvBo6eoM0E0sO8tGTwA2WUNZ\nwRTL20ErPsLPucFqGMBZ2YHis0MXW/PGNBO6ZADX67iE8xXOk27tQ5wB+gIlOHgt\n4CE9RAl5xQKBgF3mU9HOt+7i1aLkrRBsjysxKrkC9rnV0g4WeqG6U3yZarvQwB9e\nAvSbBCWA/PC2zMbF+yrIK5JUSnso7tBPKCgeDFXFBacDmDfeqax4R/+oAS20VXqL\nFk8O1IvYKmHtEQ0qx1PILsLTCtgUmDXOrNdfRCswJ+8HIwixTQfjynH7AoGAYJth\nRTylwIC+jRtLbkHSh2s+t2+zmV+bVux9JkMOFM3QRuB3I9mjP+N43SeWVrCAdzjn\ntFYIaEdvzyL5oNWi6AT8Odp+3nYvpJpB+Z4J9Ru0/tEwF9oKFAKw29LKfyxyDxhJ\nBBuUz6b29Bd1LvEXdpnC9HV52mm2++m55BORtHUCgYEAwcXk0P0WqcgEOnuoR7xQ\nEwsjskxqf5Zs/NL2clleJ1qnlrkFqZUKP8J0agTGTjCoR6qPJ15fv9afBDAaCXRw\nr7OIJc0BEJgTeicu6JrXUhk7bsLW9DqtY3GHKHkkZAQgoryBDWD5jeIoiLWBrFVo\nOef8OpRPaC5A/7Me+9prkGs=\n-----END PRIVATE KEY-----\n";
const clientEmail = "ooo-hubspot-management@grounded-vista-401120.iam.gserviceaccount.com";
exports.main = async (event, callback) => {
if(!sheetId) throw new Error(`sheetId has to be set !`);
if(sheetId === "") throw new Error(`sheetId can't be empty !`);
if(!privateKey) throw new Error(`privateKey has to be set !`);
if(privateKey === "") throw new Error(`privateKey can't be empty !`);
if(!clientEmail) throw new Error(`clientEmail has to be set !`);
if(clientEmail === "") throw new Error(`clientEmail can't be empty !`);
callback({
outputFields: {
sheetId,
privateKey,
clientEmail
}
});
}
Add a second custom code block for our logic
The code is available at this link ; you can use the copy button in the top-right corner.
Important : after you paste the code, add the following line at the top of the file :
const SECRET_NAME = "privateAppToken";
Between the quotes, put the name of your secret. In my case my token is named privateAppToken, but if yours is different, edit it.
So the Secret selected and the SECRET_NAME should match, like what we have on this screenshot :
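Since the logic block itself lives behind the link above, here is only a rough sketch of how a block like it might read the availability sheet. It assumes the google-spreadsheet package (v3 API) and a hypothetical outOfOffice column in the template ; the real code from the link is the one to use :
const { GoogleSpreadsheet } = require('google-spreadsheet');
exports.main = async (event, callback) => {
    // Credentials come from the previous custom code block's output fields
    const doc = new GoogleSpreadsheet(event.inputFields.sheetId);
    await doc.useServiceAccountAuth({
        client_email: event.inputFields.clientEmail,
        // Keys pasted into HubSpot often have escaped newlines; restore them
        private_key: event.inputFields.privateKey.replace(/\\n/g, '\n'),
    });
    await doc.loadInfo();
    const sheet = doc.sheetsByIndex[0];
    const rows = await sheet.getRows();
    // Hypothetical columns: "email" and "outOfOffice"
    const availability = rows.map(row => ({
        email: row.email,
        outOfOffice: row.outOfOffice === 'TRUE',
    }));
    callback({ outputFields: { availability: JSON.stringify(availability) } });
};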
Add a third block to assign
/*
*
* Edit your Secret Name here
*/
const SECRET_NAME = "privateAppToken"
const OBJECTS_TYPE = "tickets"
/*
*
*
* Only edit below this line if you know what you are doing
*
*
*/
const axios = require('axios');
const SECRET_NAME_TO_USE = SECRET_NAME ? SECRET_NAME : "privateAppToken";
const axiosConfig = {
headers: {
authorization: `Bearer ${process.env[SECRET_NAME_TO_USE]}`
}
};
exports.main = async (event, callback) => {
const ownerId = event.inputFields.ownerId;
if (!ownerId) throw new Error('ownerId is not set, are you sure you put ownerId in the "properties to include in code" ? ');
const objectId = event.object.objectId;
const user = await getUserDataById(ownerId).catch(axiosErrorHandler);
// Guard against a failed request (axiosErrorHandler swallows the rejection)
if (!user || !user.data) throw new Error(`Error when getting ${ownerId} infos`);
const { email, id } = user.data;
const update = await updateOwnerId(objectId, id).catch(axiosErrorHandler);
if (!update || !update.data) throw new Error(`We couldn't update the object owner`);
if (update.data.id) console.log(`Association worked ! ${email} is now the owner`);
callback({
outputFields: {
newOwner : update.data && update.data.id ? email : null
}
});
}
/**
* From a user id we can get an owner id (the lookup uses idProperty=userId)
* @param {*} ownerId - the HubSpot user id to look up
* @returns {Promise<axios.Response>} the owner data
*/
const getUserDataById = async (ownerId) => {
const endPoint = `https://api.hubapi.com/crm/v3/owners/${ownerId}?idProperty=userId`;
const data = await axios.get(endPoint, axiosConfig);
return data;
}
const updateOwnerId = async (objectId,ownerId) => {
if (!ownerId) throw new Error('ownerId is not set as a parameter');
if (!objectId) throw new Error('objectId is not set as a parameter');
const endpoint = `https://api.hubapi.com/crm/v3/objects/${OBJECTS_TYPE}/${objectId}`;
return axios.patch(endpoint, {
"properties": {
"hubspot_owner_id": ownerId
}
}, axiosConfig);
}
/**
* Handles errors thrown by axios requests and logs relevant information.
*
* @param {Error} error - The error object thrown by axios.
*/
const axiosErrorHandler = error => {
if (error.response) {
// The request was made and the server responded with a status code
// that falls out of the range of 2xx
console.log(error.response.data);
console.log(error.response.status);
console.log(error.response.headers);
} else if (error.request) {
// The request was made but no response was received
// `error.request` is an instance of XMLHttpRequest in the browser
// and an instance of http.ClientRequest in node.js
console.log(error.request);
} else {
// Something happened in setting up the request that triggered an Error
console.log('Error', error.message);
}
}
Turn the WorkFlow on and set the backups
