What You'll Learn
- How to use tag IDs to manage cache invalidation and refetching
- How to work with the RTK Query cache outside of React
- Techniques for manipulating response data
- Implementing optimistic updates and streaming updates
Prerequisites
- Completion of Part 7 to understand RTK Query setup and basic usage
Introduction
In Part 7: RTK Query Basics, we saw how to set up and use the RTK Query API to handle data fetching and caching in our application. We added an "API slice" to our Redux store, defined "query" endpoints to fetch posts data, and a "mutation" endpoint to add a new post.
In this section, we'll continue migrating our example app to use RTK Query for the other data types, and see how to use some of its advanced features to simplify the codebase and improve the user experience.
Info
Some of the changes in this section aren't strictly necessary - they're included to demonstrate RTK Query's features and show some of the things it can do, so you can see how to use these features if you need them.
Editing Posts
We already added a mutation endpoint to save new Post entries to the server, and used it in our <AddPostForm>. Next, we need to update the <EditPostForm> so that it lets us edit an existing post.
Updating the Edit Post Form
As with adding posts, the first step is to define a new mutation endpoint in our API slice. This looks much like the mutation for adding a post, but the endpoint needs to include the post ID in the URL and use an HTTP PATCH request to indicate that it's only updating some of the fields.
features/api/apiSlice.js
export const apiSlice = createApi({
  reducerPath: 'api',
  baseQuery: fetchBaseQuery({ baseUrl: '/fakeApi' }),
  tagTypes: ['Post'],
  endpoints: builder => ({
    getPosts: builder.query({
      query: () => '/posts',
      providesTags: ['Post']
    }),
    getPost: builder.query({
      query: postId => `/posts/${postId}`
    }),
    addNewPost: builder.mutation({
      query: initialPost => ({
        url: '/posts',
        method: 'POST',
        body: initialPost
      }),
      invalidatesTags: ['Post']
    }),
    editPost: builder.mutation({
      query: post => ({
        url: `/posts/${post.id}`,
        method: 'PATCH',
        body: post
      })
    })
  })
})

export const {
  useGetPostsQuery,
  useGetPostQuery,
  useAddNewPostMutation,
  useEditPostMutation
} = apiSlice
Once that's added, we can update the <EditPostForm>. It needs to read the original Post entry from the store, use that to initialize the component state for editing the fields, and then send the updated changes to the server. Currently, we're reading the Post entry with selectPostById and manually dispatching a postUpdated thunk for the request.
We can use the same useGetPostQuery hook that we used in <SinglePostPage> to read the Post entry from the cache in the store, and we'll use the new useEditPostMutation hook to save the changes.
features/posts/EditPostForm.js
import React, { useState } from 'react'
import { useHistory } from 'react-router-dom'

import { Spinner } from '../../components/Spinner'
import { useGetPostQuery, useEditPostMutation } from '../api/apiSlice'

export const EditPostForm = ({ match }) => {
  const { postId } = match.params

  const { data: post } = useGetPostQuery(postId)

  const [updatePost, { isLoading }] = useEditPostMutation()

  const [title, setTitle] = useState(post.title)
  const [content, setContent] = useState(post.content)

  const history = useHistory()

  const onTitleChanged = e => setTitle(e.target.value)
  const onContentChanged = e => setContent(e.target.value)

  const onSavePostClicked = async () => {
    if (title && content) {
      await updatePost({ id: postId, title, content })
      history.push(`/posts/${postId}`)
    }
  }

  // omit rendering logic
}
Cache Data Subscription Lifetimes
Let's try this out and see what happens. Open your browser's DevTools, go to the Network tab, and refresh the main page. You should see a GET request to /posts as we fetch the initial data. When you click a "View Post" button, you should see a second request to /posts/:postId that returns that single post entry.
Now click "Edit Post" inside the single post page. The UI switches over to show <EditPostForm>, but this time there's no network request for the individual post. Why not?
RTK Query allows multiple components to subscribe to the same data, and ensures that each unique set of data is only fetched once. Internally, RTK Query keeps a reference counter of active "subscriptions" for each endpoint + cache key combination. If Component A calls useGetPostQuery(42), that data is fetched. If Component B then mounts and also calls useGetPostQuery(42), it's asking for exactly the same data. The two hook usages return exactly the same results, including the fetched data and the loading status flags.
When the number of active subscriptions goes down to 0, RTK Query starts an internal timer. If the timer expires before any new subscriptions for the data are added, RTK Query automatically removes that data from the cache, because the app no longer needs it. However, if a new subscription is added before the timer expires, the timer is canceled, and the already-cached data is used without needing to refetch it.
In this case, our <SinglePostPage> mounted and requested that individual Post by ID. When we clicked "Edit Post", the <SinglePostPage> component was unmounted by the router, and the active subscription was removed due to unmounting. RTK Query immediately started a timer to remove that post's data. But the <EditPostForm> component mounted right away and subscribed to the same Post data with the same cache key. So RTK Query canceled the timer and kept using the same cached data instead of refetching it from the server.
By default, unused data is removed from the cache after 60 seconds, but this can be configured in the root API slice definition, or overridden in individual endpoint definitions, using the keepUnusedDataFor option, which specifies the cache lifetime in seconds.
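As a rough sketch of what that configuration could look like (the 30- and 5-second values here are arbitrary, not values used in this app):

  import { createApi, fetchBaseQuery } from '@reduxjs/toolkit/query/react'

  // Hypothetical sketch: configuring cache lifetimes at both levels
  export const apiSlice = createApi({
    reducerPath: 'api',
    baseQuery: fetchBaseQuery({ baseUrl: '/fakeApi' }),
    // Default cache lifetime for every endpoint in this API slice: 30 seconds
    keepUnusedDataFor: 30,
    endpoints: builder => ({
      getPosts: builder.query({
        query: () => '/posts',
        // Override for this endpoint only: drop unused data after 5 seconds
        keepUnusedDataFor: 5
      })
    })
  })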
Invalidating Specific Items
Our <EditPostForm> component can now save the edited post to the server, but we have a problem. If we click "Save Post" while editing, we return to the <SinglePostPage>, but it's still showing the old data without the edits. The <SinglePostPage> is still using the cached Post entry that was fetched earlier. For that matter, if we return to the main page and look at the <PostsList>, it's also showing the old data. We need a way to force a refetch of both the individual Post entry and the entire list of posts.
Earlier, we saw how we can use "tags" to invalidate parts of our cached data. We declared that the getPosts query endpoint provides a 'Post' tag, and that the addNewPost mutation endpoint invalidates that same 'Post' tag. That way, every time we add a new post, we force RTK Query to refetch the entire list of posts from the getPosts endpoint.
We could add a 'Post' tag to both the getPost query and the editPost mutation, but that would force all the other individual posts to be refetched as well. Fortunately, RTK Query lets us define specific tags, which let us be more selective in invalidating data. These specific tags look like {type: 'Post', id: 123}.
Our getPosts query defines a providesTags field that is an array of strings. The providesTags field can also accept a callback function that receives the result and arg values and returns an array. This lets us create tag entries based on the IDs of the data being fetched. Similarly, invalidatesTags can be a callback as well.
In order to get the right behavior, we need to set up each endpoint with the right tags:
- getPosts: provides a general 'Post' tag for the whole list, as well as a specific {type: 'Post', id} tag for each received post object
- getPost: provides a specific {type: 'Post', id} object for the individual post object
- addNewPost: invalidates the general 'Post' tag, to refetch the whole list
- editPost: invalidates the specific {type: 'Post', id} tag. This forces a refetch of both the individual post from getPost, as well as the entire list of posts from getPosts, because they both provide a tag that matches that {type, id} value.
features/api/apiSlice.js
export const apiSlice = createApi({
  reducerPath: 'api',
  baseQuery: fetchBaseQuery({ baseUrl: '/fakeApi' }),
  tagTypes: ['Post'],
  endpoints: builder => ({
    getPosts: builder.query({
      query: () => '/posts',
      providesTags: (result = [], error, arg) => [
        'Post',
        ...result.map(({ id }) => ({ type: 'Post', id }))
      ]
    }),
    getPost: builder.query({
      query: postId => `/posts/${postId}`,
      providesTags: (result, error, arg) => [{ type: 'Post', id: arg }]
    }),
    addNewPost: builder.mutation({
      query: initialPost => ({
        url: '/posts',
        method: 'POST',
        body: initialPost
      }),
      invalidatesTags: ['Post']
    }),
    editPost: builder.mutation({
      query: post => ({
        url: `/posts/${post.id}`,
        method: 'PATCH',
        body: post
      }),
      invalidatesTags: (result, error, arg) => [{ type: 'Post', id: arg.id }]
    })
  })
})
It's possible for the result argument in these callbacks to be undefined if the response has no data or there's an error, so we have to handle that safely. For getPosts we can do that by using a default argument array value to map over, and for getPost we're already returning a single-item array based on the argument ID. For editPost, we know the ID of the post from the partial post object that was passed into the trigger function, so we can read it from there.
With those changes in place, let's go back and try editing a post again, with the Network tab open in the browser DevTools.
When we save the edited post this time, we should see two requests happen back-to-back:
- The PATCH /posts/:postId from the editPost mutation
- A GET /posts/:postId as the getPost query is refetched
Then, if we click back over to the main "Posts" tab, we should also see:
- A GET /posts as the getPosts query is refetched
Because we provided the relationships between the endpoints using tags, RTK Query knew that it needed to refetch the individual post and the list of posts when we made that edit and the specific tag with that ID was invalidated - no further changes needed! Meanwhile, while we were editing the post, the cache removal timer for the getPosts data expired, so it was removed from the cache. When we opened the <PostsList> component again, RTK Query saw that the data was not in the cache and refetched it.
There is one caveat here. By specifying a plain 'Post' tag in getPosts and invalidating it in addNewPost, we actually end up forcing a refetch of all individual posts as well. If we really want to refetch only the list of posts for the getPosts endpoint, you can include an additional tag with an arbitrary ID, like {type: 'Post', id: 'LIST'}, and invalidate that tag instead. The RTK Query reference docs have a table that describes what will happen when certain general/specific tag combinations are invalidated.
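Here's a hedged sketch of what that 'LIST' tag pattern could look like (these are not the tags we're actually using in this tutorial's code):

  import { createApi, fetchBaseQuery } from '@reduxjs/toolkit/query/react'

  // Hypothetical sketch of the "list" tag pattern
  const listTagApiSlice = createApi({
    reducerPath: 'api',
    baseQuery: fetchBaseQuery({ baseUrl: '/fakeApi' }),
    tagTypes: ['Post'],
    endpoints: builder => ({
      getPosts: builder.query({
        query: () => '/posts',
        providesTags: (result = []) => [
          { type: 'Post', id: 'LIST' },
          ...result.map(({ id }) => ({ type: 'Post', id }))
        ]
      }),
      addNewPost: builder.mutation({
        query: initialPost => ({ url: '/posts', method: 'POST', body: initialPost }),
        // Only the list tag is invalidated, so cached individual getPost entries stay as-is
        invalidatesTags: [{ type: 'Post', id: 'LIST' }]
      })
    })
  })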
Info
RTK Query has many other options for controlling when and how to refetch data, including "conditional fetching", "lazy queries", and "prefetching", and query definitions can be customized in a variety of ways. See the RTK Query usage guide docs for more details on using those features.
Managing Users Data
We've finished converting our posts data management over to use RTK Query. Next up, we'll convert the list of users.
Since we've already seen how to use the RTK Query hooks for fetching and reading data, we're going to try a different approach for this section. RTK Query's core API is UI-agnostic and can be used with any UI layer, not just React. Normally you should stick with the hooks, but here we're going to work with the user data using just the RTK Query core API so you can see how to use it.
Fetching Users Manually
We're currently defining a fetchUsers async thunk in usersSlice.js, and dispatching that thunk manually in index.js so that the list of users is available as soon as possible. We can do the same process with RTK Query.
We'll start by defining a getUsers query endpoint in apiSlice.js, similar to our existing endpoints. We'll export the useGetUsersQuery hook just for consistency, but for now we're not going to use it.
features/api/apiSlice.js
export const apiSlice = createApi({
  reducerPath: 'api',
  baseQuery: fetchBaseQuery({ baseUrl: '/fakeApi' }),
  tagTypes: ['Post'],
  endpoints: builder => ({
    // omit other endpoints

    getUsers: builder.query({
      query: () => '/users'
    })
  })
})

export const {
  useGetPostsQuery,
  useGetPostQuery,
  useGetUsersQuery,
  useAddNewPostMutation,
  useEditPostMutation
} = apiSlice
If we inspect the API slice object, it contains an endpoints field, with one endpoint object inside for each endpoint we've defined.
Each endpoint object contains:
- The same primary query/mutation hook that we exported from the root API slice object, but named as useQuery or useMutation
- For query endpoints, an additional set of query hooks for scenarios like "lazy queries" or partial subscriptions
- A set of "matcher" utilities to check for the pending/fulfilled/rejected actions dispatched by requests for this endpoint (see the sketch below)
- An initiate thunk that triggers a request for this endpoint
- A select function that creates memoized selectors that can retrieve the cached result data + status entries for this endpoint
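As a hedged sketch of those matcher utilities in action (this postsCountSlice is hypothetical, not part of the example app; we'll use the same matchFulfilled pattern for real later in this page):

  import { createSlice } from '@reduxjs/toolkit'
  import { apiSlice } from '../api/apiSlice'

  // Hypothetical sketch: reacting to an endpoint's fulfilled action in another slice
  const postsCountSlice = createSlice({
    name: 'postsCount',
    initialState: { lastFetchedCount: 0 },
    reducers: {},
    extraReducers: builder => {
      // Runs whenever the getPosts query successfully resolves
      builder.addMatcher(
        apiSlice.endpoints.getPosts.matchFulfilled,
        (state, action) => {
          state.lastFetchedCount = action.payload.length
        }
      )
    }
  })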
If we want to fetch the list of users outside of React, we can dispatch the getUsers.initiate() thunk in our index file:
index.js
// omit other imports
import { apiSlice } from './features/api/apiSlice'

async function main() {
  // Start our mock API server
  await worker.start({ onUnhandledRequest: 'bypass' })

  store.dispatch(apiSlice.endpoints.getUsers.initiate())

  ReactDOM.render(
    <React.StrictMode>
      <Provider store={store}>
        <App />
      </Provider>
    </React.StrictMode>,
    document.getElementById('root')
  )
}
main()
This dispatch happens automatically inside the query hooks, but we can start it manually if needed.
Caution
Manually dispatching an RTKQ request thunk creates a subscription entry, but it's then up to you to unsubscribe from that data later - otherwise the data stays in the cache permanently. In this case, we always need the user data, so we can skip unsubscribing.
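For reference, a rough sketch of what that cleanup could look like if you did need it - the object returned from dispatching initiate includes an unsubscribe method:

  // Hypothetical sketch: removing a manual subscription when the data is no longer needed
  const usersFetch = store.dispatch(apiSlice.endpoints.getUsers.initiate())

  // ... later, once nothing needs the users data anymore:
  usersFetch.unsubscribe()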
Selecting Users Data
We currently have selectors like selectAllUsers and selectUserById that are generated by our createEntityAdapter users adapter and read from state.users. If we reload the page, all of our user-related display breaks because the state.users slice has no data. Now that we're fetching data into RTK Query's cache, we need to replace those selectors with equivalents that read from the cache instead.
The endpoint.select() function on the API slice endpoints creates a new memoized selector function every time we call it. select() takes a cache key as its argument, and this must be the same cache key that you pass as an argument to either the query hooks or the initiate() thunk. The generated selector uses that cache key to know exactly which cached result it should return from the cache state in the store.
In this case, our getUsers endpoint doesn't take any parameters - we always fetch the entire list of users. So we can create a cache selector with no argument, and the cache key becomes undefined.
features/users/usersSlice.js
import {
  createSlice,
  createEntityAdapter,
  createSelector
} from '@reduxjs/toolkit'

import { apiSlice } from '../api/apiSlice'

/* Temporarily ignore adapter - we'll use this again shortly
const usersAdapter = createEntityAdapter()

const initialState = usersAdapter.getInitialState()
*/

// Calling `someEndpoint.select(someArg)` generates a new selector that will return
// the query result object for a query with those parameters.
// To generate a selector for a specific query argument, call `select(theQueryArg)`.
// In this case, the users query has no params, so we don't pass anything to select()
export const selectUsersResult = apiSlice.endpoints.getUsers.select()

const emptyUsers = []

export const selectAllUsers = createSelector(
  selectUsersResult,
  usersResult => usersResult?.data ?? emptyUsers
)

export const selectUserById = createSelector(
  selectAllUsers,
  (state, userId) => userId,
  (users, userId) => users.find(user => user.id === userId)
)

/* Temporarily ignore selectors - we'll come back to this later
export const {
  selectAll: selectAllUsers,
  selectById: selectUserById
} = usersAdapter.getSelectors(state => state.users)
*/
Once we have that initial selectUsersResult selector, we can replace the existing selectAllUsers selector with one that returns the array of users from the cache result, and then replace selectUserById with one that finds the right user from that array.
For now, we're commenting out those selectors from the usersAdapter - we'll make another change later that switches back to using those.
Our components are already importing selectAllUsers and selectUserById, so this change should just work! Try refreshing the page and clicking through the posts list and single post view. The correct user names should appear in each displayed post, and in the dropdown in the <AddPostForm>.
Since the usersSlice is no longer actually being used at all, we can go ahead and delete the createSlice call from this file, and remove users: usersReducer from our store setup. We've still got a couple of pieces of code that reference postsSlice, so we can't quite remove that yet - we'll get to it shortly.
Injecting Endpoints
It's common for larger applications to "code-split" features into separate bundles, and then "lazy load" them on demand as each feature is used for the first time. We said that RTK Query normally has a single "API slice" per application, and so far we've defined all of our endpoints directly in apiSlice.js. What happens if we want to code-split some of our endpoint definitions, or move them into another file to keep the API slice file from getting too big?
RTK Query supports splitting out endpoint definitions with apiSlice.injectEndpoints(). That way, we can still have a single API slice with a single middleware and cache reducer, but we can move the definition of some endpoints to other files. This enables code-splitting scenarios, as well as co-locating some endpoints alongside feature folders if desired.
To illustrate this process, let's switch the getUsers endpoint to be injected in usersSlice.js, instead of being defined in apiSlice.js.
We're already importing apiSlice into usersSlice.js so that we can access the getUsers endpoint, so we can switch this to call apiSlice.injectEndpoints() here instead.
features/users/usersSlice.js
import { apiSlice } from '../api/apiSlice'

export const extendedApiSlice = apiSlice.injectEndpoints({
  endpoints: builder => ({
    getUsers: builder.query({
      query: () => '/users'
    })
  })
})

export const { useGetUsersQuery } = extendedApiSlice

export const selectUsersResult = extendedApiSlice.endpoints.getUsers.select()
injectEndpoints() mutates the original API slice object to add the additional endpoint definitions, and then returns it. The cache reducer and middleware that we originally added to the store still work fine as-is. At this point, apiSlice and extendedApiSlice are the same object, but it can be helpful to refer to the extendedApiSlice object instead of apiSlice here as a reminder to ourselves. (This is more important if you're using TypeScript, because only the extendedApiSlice value has the added types for the new endpoints.)
At the moment, the only file that references the getUsers endpoint is our index file, which is dispatching the initiate thunk. We need to update that to import the extended API slice instead:
index.js
  // omit other imports
- import { apiSlice } from './features/api/apiSlice'
+ import { extendedApiSlice } from './features/users/usersSlice'

  async function main() {
    // Start our mock API server
    await worker.start({ onUnhandledRequest: 'bypass' })

-   store.dispatch(apiSlice.endpoints.getUsers.initiate())
+   store.dispatch(extendedApiSlice.endpoints.getUsers.initiate())

    ReactDOM.render(
      <React.StrictMode>
        <Provider store={store}>
          <App />
        </Provider>
      </React.StrictMode>,
      document.getElementById('root')
    )
  }
  main()
Alternately, you could just export the specific endpoints themselves from the slice file.
Manipulating Response Data
So far, all of our query endpoints have simply stored the response data from the server exactly as it was received in the body. getPosts and getUsers both expect the server to return an array, and getPost expects the individual Post object as the body.
It's common for clients to need to extract pieces of data from the server response, or to transform the data in some way before caching it. For example, what if the /getPost request returns a body like {post: {id}}, with the data nested?
There are a couple of ways that we could conceptually handle this. One option would be to extract the responseData.post field and store that in the cache, instead of the entire body. Another would be to store the entire response data in the cache, but have our components specify just the specific piece of that cached data that they need.
Transforming Responses
Endpoints can define a transformResponse handler that can extract or modify the data received from the server before it's cached. For the getPost example, we could have transformResponse: (responseData) => responseData.post, and it would cache just the actual Post object instead of the entire response body.
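As a rough sketch of that idea (assuming a hypothetical server that nests the post under a post field - this is not the shape our mock API actually returns):

  // Inside the `endpoints` builder of the API slice:
  // Hypothetical sketch: unwrapping a nested response body before it's cached
  getPost: builder.query({
    query: postId => `/posts/${postId}`,
    // The server responds with { post: {...} }, but we only cache the inner object
    transformResponse: responseData => responseData.post
  })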
In Part 6: Performance and Normalization, we discussed reasons why it's useful to store data in a normalized structure. In particular, it lets us look up and update items by ID, rather than having to loop over an array to find the right item.
Our selectUserById selector currently has to loop over the cached array of users to find the right User object. If we transformed the response data so it's stored using a normalized approach, we could simplify that to look up the user directly by ID.
We were previously using createEntityAdapter in usersSlice to manage normalized users data. We can integrate createEntityAdapter into our extendedApiSlice, and actually use createEntityAdapter to transform the data before it's cached. We'll uncomment the usersAdapter lines we originally had, and reuse its update functions and selectors.
features/users/usersSlice.js
import { createEntityAdapter, createSelector } from '@reduxjs/toolkit'

import { apiSlice } from '../api/apiSlice'

const usersAdapter = createEntityAdapter()

const initialState = usersAdapter.getInitialState()

export const extendedApiSlice = apiSlice.injectEndpoints({
  endpoints: builder => ({
    getUsers: builder.query({
      query: () => '/users',
      transformResponse: responseData => {
        return usersAdapter.setAll(initialState, responseData)
      }
    })
  })
})

export const { useGetUsersQuery } = extendedApiSlice

// Calling `someEndpoint.select(someArg)` generates a new selector that will return
// the query result object for a query with those parameters.
// To generate a selector for a specific query argument, call `select(theQueryArg)`.
// In this case, the users query has no params, so we don't pass anything to select()
export const selectUsersResult = extendedApiSlice.endpoints.getUsers.select()

const selectUsersData = createSelector(
  selectUsersResult,
  usersResult => usersResult.data
)

export const { selectAll: selectAllUsers, selectById: selectUserById } =
  usersAdapter.getSelectors(state => selectUsersData(state) ?? initialState)
We've added a transformResponse option to the getUsers endpoint. It receives the entire response data body as its argument, and should return the actual data to be cached. By calling usersAdapter.setAll(initialState, responseData), it returns the standard {ids: [], entities: {}} normalized data structure containing all of the received items.
The adapter.getSelectors() function needs to be given an "input selector" so it knows where to find the normalized data. In this case, the data is nested inside the RTK Query cache reducer, so we select the right field out of the cache state.
Normalized vs Document Caches
It's worth stepping back for a minute to discuss what we just did further.
You may have heard the term "normalized cache" in relation to other data fetching libraries like Apollo. It's important to understand that RTK Query uses a "document cache" approach, not a "normalized cache".
A fully normalized cache tries to deduplicate similar items across all queries, based on item type and ID. As an example, say that we have an API slice with getTodos and getTodo endpoints, and our components make the following queries:
- getTodos()
- getTodos({filter: 'odd'})
- getTodo({id: 1})
Each of these query results would include a Todo object that looks like {id: 1}.
In a fully normalized de-duplicating cache, only a single copy of this Todo object would be stored. However, RTK Query saves each query result independently in the cache. So this would result in three separate copies of this Todo being cached in the Redux store. However, if all the endpoints consistently provide the same tags (such as {type: 'Todo', id: 1}), then invalidating that tag will force all the matching endpoints to refetch their data for consistency.
RTK Query deliberately does not implement a cache that would deduplicate identical items across multiple requests. There are several reasons for this:
- A fully normalized shared-across-queries cache is a hard problem to solve
- We don't have the time, resources, or interest in trying to solve that right now
- In many cases, simply refetching data when it's invalidated works well and is easier to understand
- At a minimum, RTKQ can help solve the general "fetch some data" use case, which is a big pain point for a lot of people
In comparison, we just normalized the response data for the getUsers endpoint, in that it's being stored as an {[id]: value} lookup table. However, this is not the same thing as a "normalized cache" - we only transformed how this one response is stored, rather than deduplicating results across endpoints or requests.
Selecting Values from Results
The last component that is still reading from the old postsSlice is <UserPage>, which filters the list of posts based on the current user. We've already seen that we can get the entire list of posts with useGetPostsQuery() and then transform it in the component, such as sorting inside a useMemo. The query hooks also give us the ability to select pieces of the cached state by providing a selectFromResult option, and only re-render when the selected pieces change.
We can use selectFromResult to have <UserPage> read just a filtered list of posts from the cache. However, in order for selectFromResult to avoid unnecessary re-renders, we need to ensure that whatever data we extract is memoized correctly. To do that, we should create a new selector instance that the <UserPage> component can reuse every time it renders, so that the selector memoizes the result based on its inputs.
features/users/UsersPage.js
// omit other imports
import { createSelector } from '@reduxjs/toolkit'

import { selectUserById } from '../users/usersSlice'
import { useGetPostsQuery } from '../api/apiSlice'

export const UserPage = ({ match }) => {
  const { userId } = match.params

  const user = useSelector(state => selectUserById(state, userId))

  const selectPostsForUser = useMemo(() => {
    const emptyArray = []
    // Return a unique selector instance for this page so that
    // the filtered results are correctly memoized
    return createSelector(
      res => res.data,
      (res, userId) => userId,
      (data, userId) => data?.filter(post => post.user === userId) ?? emptyArray
    )
  }, [])

  // Use the same posts query, but extract only part of its data
  const { postsForUser } = useGetPostsQuery(undefined, {
    selectFromResult: result => ({
      // We can optionally include the other metadata fields from the result here
      ...result,
      // Include a field called `postsForUser` in the hook result object,
      // which will be a filtered list of posts
      postsForUser: selectPostsForUser(result, userId)
    })
  })

  // omit rendering logic
}
There's a key difference with the memoized selector function we've created here. Normally, selectors expect the entire Redux state as their first argument, and extract or derive a value from state. In this case, however, we're only dealing with the "result" value that is kept in the cache. The result object has a data field inside with the actual values we need, as well as some of the request metadata fields.
Our selectFromResult callback receives the result object containing the original request metadata and the data from the server, and should return some extracted or derived values. Because query hooks add an additional refetch method to whatever is returned here, it's preferable to always return an object from selectFromResult with the fields inside that you need.
Since result is kept in the Redux store, we can't mutate it - we have to return a new object. The query hook does a "shallow" comparison on this returned object, and only re-renders the component if one of the fields has changed. We can optimize re-renders by returning only the specific fields needed by this component - if we don't need the rest of the metadata flags, we could omit them entirely. If you do need them, you can spread the original result value to include them in the output.
In this case, we'll call the field postsForUser, and we can destructure that new field from the hook result. By calling selectPostsForUser(result, userId) every time, it memoizes the filtered array and only recalculates it when the fetched data or the user ID changes.
Comparing Transformation Approaches
We've now seen three different ways that we can manage transforming responses:
- Keep the original response in the cache, read the full result in the component, and derive values there
- Keep the original response in the cache, read a derived result with selectFromResult
- Transform the response before it's inserted into the cache
Each of these approaches can be useful in different situations. Here are some suggestions for when you should consider using them:
- transformResponse: when all consumers of the endpoint want a specific format, such as normalizing the response to enable faster lookups by ID
- selectFromResult: when some consumers of the endpoint only need partial data, such as a filtered list
- per-component / useMemo: when only some specific components need to transform the cached data (see the sketch after this list)
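Here's a hedged sketch of that last per-component option (the SortedPostsList component is just for illustration - it's not code from the example app):

  import { useMemo } from 'react'

  import { useGetPostsQuery } from '../api/apiSlice'

  // Hypothetical sketch: transforming cached data inside one component with useMemo
  export const SortedPostsList = () => {
    const { data: posts = [] } = useGetPostsQuery()

    // Only this component pays the cost of sorting, and only when `posts` changes
    const sortedPosts = useMemo(() => {
      return posts.slice().sort((a, b) => b.date.localeCompare(a.date))
    }, [posts])

    // omit rendering logic
  }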
Advanced Cache Updates
We've finished updating our posts and users data, so all that's left is working with reactions and notifications. Switching these to use RTK Query gives us a chance to try out some of the advanced techniques available for working with RTK Query's cached data, and lets us provide a better experience for our users.
Persisting Reactions
Originally, we only tracked reactions on the client side and did not persist them to the server. Let's add a new addReaction mutation and use that to update the corresponding Post on the server every time the user clicks a reaction button.
features/api/apiSlice.js
export const apiSlice = createApi({
  reducerPath: 'api',
  baseQuery: fetchBaseQuery({ baseUrl: '/fakeApi' }),
  tagTypes: ['Post'],
  endpoints: builder => ({
    // omit other endpoints

    addReaction: builder.mutation({
      query: ({ postId, reaction }) => ({
        url: `posts/${postId}/reactions`,
        method: 'POST',
        // In a real app, we'd probably need to base this on user ID somehow
        // so that a user can't do the same reaction more than once
        body: { reaction }
      }),
      invalidatesTags: (result, error, arg) => [
        { type: 'Post', id: arg.postId }
      ]
    })
  })
})

export const {
  useGetPostsQuery,
  useGetPostQuery,
  useAddNewPostMutation,
  useEditPostMutation,
  useAddReactionMutation
} = apiSlice
Similar to our other mutations, we take some parameters and make a request to the server, with some data in the body of the request. Since this example app is small, we'll just send the name of the reaction and let the server increment the counter for that reaction type on this post.
We already know that we need to refetch this post in order to see any of the data change on the client, so we can invalidate this specific Post entry based on its ID.
With that in place, let's update <ReactionButtons> to use this mutation.
features/posts/ReactionButtons.js
import React from 'react'

import { useAddReactionMutation } from '../api/apiSlice'

const reactionEmoji = {
  thumbsUp: '👍',
  hooray: '🎉',
  heart: '❤️',
  rocket: '🚀',
  eyes: '👀'
}

export const ReactionButtons = ({ post }) => {
  const [addReaction] = useAddReactionMutation()

  const reactionButtons = Object.entries(reactionEmoji).map(
    ([reactionName, emoji]) => {
      return (
        <button
          key={reactionName}
          type="button"
          className="muted-button reaction-button"
          onClick={() => {
            addReaction({ postId: post.id, reaction: reactionName })
          }}
        >
          {emoji} {post.reactions[reactionName]}
        </button>
      )
    }
  )

  return <div>{reactionButtons}</div>
}
Let's see this in action! Go to the main <PostsList> and click one of the reactions to see what happens.
Uh-oh. The entire <PostsList> component grayed out, because we just refetched the entire list of posts in response to that one post being updated. This is deliberately more visible because our mock API server is set to have a 2-second delay before responding, but even if the response is faster, this still isn't a good user experience.
Implementing Optimistic Updates
For a small update like adding a reaction, we probably don't need to refetch the entire list of posts. Instead, we could try just updating the already-cached data on the client to match what we expect to happen on the server. Also, if we update the cache immediately, the user gets instant feedback when they click the button instead of having to wait for the response to come back. This approach of updating client state right away is called an "optimistic update", and it's a common pattern in web apps.
RTK Query lets you implement optimistic updates by modifying the client-side cache based on "request lifecycle" handlers. Endpoints can define an onQueryStarted function that is called when a request starts, and we can run additional logic in that handler.
features/api/apiSlice.js
export const apiSlice = createApi({
  reducerPath: 'api',
  baseQuery: fetchBaseQuery({ baseUrl: '/fakeApi' }),
  tagTypes: ['Post'],
  endpoints: builder => ({
    // omit other endpoints

    addReaction: builder.mutation({
      query: ({ postId, reaction }) => ({
        url: `posts/${postId}/reactions`,
        method: 'POST',
        // In a real app, we'd probably need to base this on user ID somehow
        // so that a user can't do the same reaction more than once
        body: { reaction }
      }),
      async onQueryStarted({ postId, reaction }, { dispatch, queryFulfilled }) {
        // `updateQueryData` requires the endpoint name and cache key arguments,
        // so it knows which piece of cache state to update
        const patchResult = dispatch(
          apiSlice.util.updateQueryData('getPosts', undefined, draft => {
            // The `draft` is Immer-wrapped and can be "mutated" like in createSlice
            const post = draft.find(post => post.id === postId)
            if (post) {
              post.reactions[reaction]++
            }
          })
        )
        try {
          await queryFulfilled
        } catch {
          patchResult.undo()
        }
      }
    })
  })
})
The onQueryStarted handler receives two parameters. The first is the cache key arg that was passed when the request started. The second is an object that contains some of the same fields as the thunkApi in createAsyncThunk ({dispatch, getState, extra, requestId}), but also a Promise called queryFulfilled. This Promise resolves when the request returns, and either fulfills or rejects based on the request.
The API slice object includes an updateQueryData util function that lets us update cached values. It takes three arguments: the name of the endpoint to update, the same cache key value used to identify the specific cached data, and a callback that updates the cached data. updateQueryData uses Immer, so you can "mutate" the drafted cache data the same way you would in createSlice.
We can implement the optimistic update by finding the specific Post entry in the getPosts cache, and "mutating" it to increment the reaction counter.
updateQueryData generates an action object with a patch diff of the changes we made. When we dispatch that action, the return value is a patchResult object. If we call patchResult.undo(), it automatically dispatches an action that reverses the patch diff changes.
By default, we expect that the request will succeed. In case the request fails, we can await queryFulfilled, catch the failure, and undo the patch changes to revert the optimistic update.
For this case, we've also removed the invalidatesTags line we'd just added, since we don't want to refetch the posts when we click a reaction button.
Now, if we click several times on a reaction button quickly, we should see the number increment in the UI each time. If we look at the Network tab, we'll also see each individual request go out to the server as well.
Streaming Cache Updates
Our final feature is the notifications tab. When we originally built this feature in Part 6, we said that "in a real app, the server would push updates to our client every time something happens". We initially faked that by adding a "Refresh Notifications" button and having it make an HTTP GET request for more notification entries.
It's common for apps to make an initial request to fetch data from the server, and then open up a Websocket connection to receive additional updates over time. RTK Query provides an onCacheEntryAdded endpoint lifecycle handler that lets us implement "streaming updates" to cached data. We'll use that capability to implement a more realistic approach to managing notifications.
Our src/api/server.js file already has a mock Websocket server configured, similar to the mock HTTP server. We'll write a new getNotifications endpoint that fetches the initial list of notifications, and then establishes the Websocket connection to listen for future updates. We still need to manually tell the mock server when to send new notifications, so we'll continue faking that by having a button we click to force the update.
We'll inject the getNotifications endpoint in notificationsSlice like we did with getUsers, just to show it's possible.
features/notifications/notificationsSlice.js
import { forceGenerateNotifications } from '../../api/server'
import { apiSlice } from '../api/apiSlice'

export const extendedApi = apiSlice.injectEndpoints({
  endpoints: builder => ({
    getNotifications: builder.query({
      query: () => '/notifications',
      async onCacheEntryAdded(
        arg,
        { updateCachedData, cacheDataLoaded, cacheEntryRemoved }
      ) {
        // create a websocket connection when the cache subscription starts
        const ws = new WebSocket('ws://localhost')
        try {
          // wait for the initial query to resolve before proceeding
          await cacheDataLoaded

          // when data is received from the socket connection to the server,
          // update our query result with the received message
          const listener = event => {
            const message = JSON.parse(event.data)
            switch (message.type) {
              case 'notifications': {
                updateCachedData(draft => {
                  // Insert all received notifications from the websocket
                  // into the existing RTKQ cache array
                  draft.push(...message.payload)
                  draft.sort((a, b) => b.date.localeCompare(a.date))
                })
                break
              }
              default:
                break
            }
          }

          ws.addEventListener('message', listener)
        } catch {
          // no-op in case `cacheEntryRemoved` resolves before `cacheDataLoaded`,
          // in which case `cacheDataLoaded` throws
        }
        // cacheEntryRemoved will resolve when the cache subscription is no longer active
        await cacheEntryRemoved
        // perform cleanup steps once the `cacheEntryRemoved` promise resolves
        ws.close()
      }
    })
  })
})

export const { useGetNotificationsQuery } = extendedApi

const emptyNotifications = []

export const selectNotificationsResult =
  extendedApi.endpoints.getNotifications.select()

const selectNotificationsData = createSelector(
  selectNotificationsResult,
  notificationsResult => notificationsResult.data ?? emptyNotifications
)

export const fetchNotificationsWebsocket = () => (dispatch, getState) => {
  const allNotifications = selectNotificationsData(getState())
  const [latestNotification] = allNotifications
  const latestTimestamp = latestNotification?.date ?? ''
  // Hardcode a call to the mock server to simulate a server push scenario over websockets
  forceGenerateNotifications(latestTimestamp)
}

// omit existing slice code
Like with onQueryStarted, the onCacheEntryAdded lifecycle handler receives the arg cache key as its first parameter, and an options object with the thunkApi values as the second parameter. The options object also contains an updateCachedData util function, and two lifecycle Promises - cacheDataLoaded and cacheEntryRemoved. cacheDataLoaded resolves when the initial data for this subscription is added to the store. That happens when the first subscription for this endpoint + cache key is added. As long as one or more subscribers for the data are still active, the cache entry is kept alive. When the number of subscribers goes to 0 and the cache lifetime timer expires, the cache entry is removed, and cacheEntryRemoved resolves. Typically, the usage pattern is:
- await cacheDataLoaded right away
- Create a server-side data subscription like a Websocket
- When an update is received, use updateCachedData to "mutate" the cached values based on the update
- await cacheEntryRemoved at the end
- Clean up subscriptions afterwards
Our mock Websocket server file exposes a forceGenerateNotifications method to mimic pushing data out to the client. That depends on knowing the most recent notification timestamp, so we add a thunk we can dispatch that reads the latest timestamp from the cache state and tells the mock server to generate newer notifications.
Inside of onCacheEntryAdded, we create a real WebSocket connection to localhost. In a real app, this could be any kind of external subscription or polling connection you need to receive ongoing updates. Whenever the mock server sends us an update, we push all of the received notifications into the cache and re-sort it.
When the cache entry is removed, we clean up the Websocket subscription. In this app, the notifications cache entry is never removed because we never unsubscribe from the data, but it's important to see how that cleanup would work for a real app.
Tracking Client-Side State
We have one final set of updates to make. Our <Navbar> component has to initiate the fetching of notifications, and <NotificationsList> needs to show the notification entries with the correct read/unread status. However, we were previously adding the read/unread fields on the client side in our notificationsSlice reducer when we received the entries, and now the notification entries are being kept in the RTK Query cache.
We can rewrite notificationsSlice so that it listens for any received notifications, and tracks some additional client-side state for each notification entry.
There are two cases when new notification entries are received: when we fetch the initial list over HTTP, and when we receive an update pushed over the Websocket connection. Ideally, we want to use the same logic in response to both cases. We can use RTK's "matching utilities" to write one case reducer that runs in response to multiple action types.
Let's see what notificationsSlice looks like after we add this logic.
features/notifications/notificationsSlice.js
import {
  createAction,
  createSlice,
  createEntityAdapter,
  createSelector,
  isAnyOf
} from '@reduxjs/toolkit'

import { forceGenerateNotifications } from '../../api/server'
import { apiSlice } from '../api/apiSlice'

const notificationsReceived = createAction(
  'notifications/notificationsReceived'
)

export const extendedApi = apiSlice.injectEndpoints({
  endpoints: builder => ({
    getNotifications: builder.query({
      query: () => '/notifications',
      async onCacheEntryAdded(
        arg,
        { updateCachedData, cacheDataLoaded, cacheEntryRemoved, dispatch }
      ) {
        // create a websocket connection when the cache subscription starts
        const ws = new WebSocket('ws://localhost')
        try {
          // wait for the initial query to resolve before proceeding
          await cacheDataLoaded

          // when data is received from the socket connection to the server,
          // update our query result with the received message
          const listener = event => {
            const message = JSON.parse(event.data)
            switch (message.type) {
              case 'notifications': {
                updateCachedData(draft => {
                  // Insert all received notifications from the websocket
                  // into the existing RTKQ cache array
                  draft.push(...message.payload)
                  draft.sort((a, b) => b.date.localeCompare(a.date))
                })
                // Dispatch an additional action so we can track "read" state
                dispatch(notificationsReceived(message.payload))
                break
              }
              default:
                break
            }
          }

          ws.addEventListener('message', listener)
        } catch {
          // no-op in case `cacheEntryRemoved` resolves before `cacheDataLoaded`,
          // in which case `cacheDataLoaded` throws
        }
        // cacheEntryRemoved will resolve when the cache subscription is no longer active
        await cacheEntryRemoved
        // perform cleanup steps once the `cacheEntryRemoved` promise resolves
        ws.close()
      }
    })
  })
})

export const { useGetNotificationsQuery } = extendedApi

// omit selectors and websocket thunk

const notificationsAdapter = createEntityAdapter()

const matchNotificationsReceived = isAnyOf(
  notificationsReceived,
  extendedApi.endpoints.getNotifications.matchFulfilled
)

const notificationsSlice = createSlice({
  name: 'notifications',
  initialState: notificationsAdapter.getInitialState(),
  reducers: {
    allNotificationsRead(state, action) {
      Object.values(state.entities).forEach(notification => {
        notification.read = true
      })
    }
  },
  extraReducers(builder) {
    builder.addMatcher(matchNotificationsReceived, (state, action) => {
      // Add client-side metadata for tracking new notifications
      const notificationsMetadata = action.payload.map(notification => ({
        id: notification.id,
        read: false,
        isNew: true
      }))

      Object.values(state.entities).forEach(notification => {
        // Any notifications we've read are no longer new
        notification.isNew = !notification.read
      })

      notificationsAdapter.upsertMany(state, notificationsMetadata)
    })
  }
})

export const { allNotificationsRead } = notificationsSlice.actions

export default notificationsSlice.reducer

export const {
  selectAll: selectNotificationsMetadata,
  selectEntities: selectMetadataEntities
} = notificationsAdapter.getSelectors(state => state.notifications)
There's a lot going on here, but let's break down the changes one at a time.
There isn't currently a good way for the notificationsSlice reducer to know when we've received an updated list of new notifications via the Websocket. So, we import createAction, define a new action type specifically for the "received some notifications" case, and dispatch that action after updating the cache state.
We want to run the same "add read/new metadata" logic for both the "fulfilled getNotifications" action and the "received from Websocket" action. We can create a new "matcher" function by calling isAnyOf() and passing in each of those action creators. The matchNotificationsReceived matcher function returns true if the current action matches either of those types.
Previously, we had a normalized lookup table for all of our notifications, and the UI selected those as a single sorted array. We're going to repurpose this slice to instead store "metadata" objects that describe the read/unread status.
We can use the builder.addMatcher() API inside of extraReducers to add a case reducer that runs in response to any action that matches either of those two action types. Inside of it, we add a new "read/isNew" metadata entry that corresponds to each notification by ID, and store it inside of notificationsSlice.
Finally, we need to change the selectors we're exporting from this slice. Instead of exporting selectAll as selectAllNotifications, we export it as selectNotificationsMetadata. It still returns an array of the normalized state values, but we're changing the name since the items themselves have changed. We also export the selectEntities selector, which returns the lookup table object itself, as selectMetadataEntities. That will be useful when we try to use this data in the UI.
With those changes in place, we can update our UI components to fetch and display notifications.
app/Navbar.js
import React from 'react'
import { useDispatch, useSelector } from 'react-redux'
import { Link } from 'react-router-dom'

import {
  fetchNotificationsWebsocket,
  selectNotificationsMetadata,
  useGetNotificationsQuery
} from '../features/notifications/notificationsSlice'

export const Navbar = () => {
  const dispatch = useDispatch()

  // Trigger the initial fetch of notifications and keep the websocket open to receive updates
  useGetNotificationsQuery()

  const notificationsMetadata = useSelector(selectNotificationsMetadata)
  const numUnreadNotifications = notificationsMetadata.filter(
    n => !n.read
  ).length

  const fetchNewNotifications = () => {
    dispatch(fetchNotificationsWebsocket())
  }

  let unreadNotificationsBadge

  if (numUnreadNotifications > 0) {
    unreadNotificationsBadge = (
      <span className="badge">{numUnreadNotifications}</span>
    )
  }

  // omit rendering logic
}
In <Navbar>, we trigger the initial notifications fetch with useGetNotificationsQuery(), and switch to reading the metadata objects from state.notifications. Clicking the "Refresh Notifications" button now triggers the mock Websocket server to push out another set of notifications.
Our <NotificationsList> similarly switches over to reading the cached data and the metadata.
features/notifications/NotificationsList.js
// omit other imports
import {
  useGetNotificationsQuery,
  allNotificationsRead,
  selectMetadataEntities
} from './notificationsSlice'

export const NotificationsList = () => {
  const dispatch = useDispatch()
  const { data: notifications = [] } = useGetNotificationsQuery()
  const notificationsMetadata = useSelector(selectMetadataEntities)
  const users = useSelector(selectAllUsers)

  useLayoutEffect(() => {
    dispatch(allNotificationsRead())
  })

  const renderedNotifications = notifications.map(notification => {
    const date = parseISO(notification.date)
    const timeAgo = formatDistanceToNow(date)
    const user = users.find(user => user.id === notification.user) || {
      name: 'Unknown User'
    }

    const notificationMetadata = notificationsMetadata[notification.id]

    const notificationClassname = classnames('notification', {
      new: notificationMetadata.isNew
    })

    // omit rendering logic
  })

  // omit rendering logic
}
We read the list of notifications from the cache and the new metadata entries from the notificationsSlice, and continue displaying them the same way as before.
As a final step, we can do some additional cleanup here - the postsSlice is no longer being used, so it can be removed entirely.
With that, we've finished converting our application over to use RTK Query! All of the data fetching has been switched over to use RTKQ, and we've improved the user experience by adding optimistic updates and streaming updates.
What You've Learned
As we've seen, RTK Query includes some powerful options for controlling how we manage cached data. While you may not need all of these options right away, they provide flexibility and key capabilities to help implement specific application behaviors.
Let's take one last look at the whole application in action:
Summary
- Specific cache tags can be used for finer-grained cache invalidation
  - Cache tags can be either 'Post' or {type: 'Post', id}
  - Endpoints can provide or invalidate cache tags based on results and arg cache keys
- RTK Query's APIs are UI-agnostic and can be used outside of React
  - Endpoint objects include functions for initiating requests, generating result selectors, and matching request action objects
- Responses can be transformed in different ways as needed
  - Endpoints can define a transformResponse callback to modify the data before caching
  - Hooks can be given a selectFromResult option to extract/transform the data
  - Components can read an entire value and transform it with useMemo
- RTK Query has advanced options to enable cache manipulation for better user experience
  - The onQueryStarted lifecycle can be used for optimistic updates by updating the cache immediately, before a request returns
  - The onCacheEntryAdded lifecycle can be used for streaming updates by updating the cache over time based on server push connections
What's Next?
Congratulations, you've completed the Redux Essentials tutorial! You should now have a solid understanding of what Redux Toolkit and React-Redux are, how to write and organize Redux logic, Redux data flow and usage with React, and how to use APIs like configureStore and createSlice. You should also see how RTK Query can simplify the process of fetching and using cached data.
The "What's Next?" section in Part 6 has links to additional resources for app ideas, tutorials, and docs.
For more details on using RTK Query, see the RTK Query usage guide docs and API reference.
If you're looking for help with Redux questions, come join the #redux channel in the Reactiflux server on Discord.
Thanks for reading through this tutorial, and we hope you enjoy building applications with Redux!