1. Legal
Copyright © 2012-2022
Copies of this document may be made for your own use and for distribution to others, provided that you do not charge any fee for such copies and further provided that each copy contains this Copyright Notice, whether distributed in print or electronically.
2. Getting Help
If you have trouble with Spring Boot, we would like to help.
3. Documentation Overview
This section provides a brief overview of Spring Boot reference documentation. It serves as a map for the rest of the document.
4. Getting Started
If you are getting started with Spring Boot, or “Spring” in general, start by reading this section. It answers the basic “what?”, “how?” and “why?” questions. It includes an introduction to Spring Boot, along with installation instructions. We then walk you through building your first Spring Boot application, discussing some core principles as we go.
4.1. Introducing Spring Boot
Spring Boot helps you to create stand-alone, production-grade Spring-based applications that you can run. We take an opinionated view of the Spring platform and third-party libraries, so that you can get started with minimum fuss. Most Spring Boot applications need very little Spring configuration. You can use Spring Boot to create Java applications that can be started by using
Our primary goals are:
4.2. System Requirements
Spring Boot 2.7.5 requires Java 8 and is compatible up to and including Java 19. Spring Framework 5.3.23 or above is also required. Explicit build support is provided for the following build tools:
4.2.1. Servlet Containers
Spring Boot supports the following embedded servlet containers:
You can also deploy Spring Boot applications to any servlet 3.1+ compatible container.
4.3. Installing Spring Boot
Spring Boot can be used with “classic” Java development tools or installed as a command line tool. Either way, you need Java SDK v1.8 or higher. Before you begin, you should check your current Java installation by using the following command:
If you are new to Java development or if you want to experiment with Spring Boot, you might want to try the Spring Boot CLI (Command Line Interface) first. Otherwise, read on for “classic” installation instructions.
4.3.1. Installation Instructions for the Java Developer
You can use Spring Boot in the same way as any standard Java library. To do so, include the appropriate
Although you could copy Spring Boot jars, we generally recommend that you use a build tool that supports dependency management (such as Maven or Gradle).
Maven Installation
Spring Boot is compatible with Apache Maven 3.3 or above. If you do not already have Maven installed, you can follow the instructions at maven.apache.org.
Spring Boot dependencies use the
More details on getting started with Spring Boot and Maven can be found in the Getting Started section of the Maven plugin’s reference guide.
Gradle Installation
Spring Boot is compatible with Gradle 6.8, 6.9, and 7.x. If you do not already have Gradle installed, you can follow the instructions at gradle.org. Spring Boot dependencies can be declared by using the
More details on getting started with Spring Boot and Gradle can be found in the Getting Started section of the Gradle plugin’s reference guide.
4.3.2. Installing the Spring Boot CLI
The Spring Boot CLI (Command Line Interface) is a command line tool that you can use to quickly prototype with Spring. It lets you run Groovy scripts, which means that you have a familiar Java-like syntax without so much boilerplate code. You do not need to use the CLI to work with Spring Boot, but it is a quick way to get a Spring application off the ground without an IDE.
Manual Installation
You can download the Spring CLI distribution from the Spring software repository:
Once downloaded, follow the INSTALL.txt instructions from the unpacked archive. In summary, there is a
Installation with SDKMAN!
SDKMAN! (The Software Development Kit Manager) can be used for managing multiple versions of various binary SDKs, including Groovy and the Spring Boot CLI. Get SDKMAN! from sdkman.io and install Spring Boot by using the following commands:
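With SDKMAN! on the path, installation and verification look like the following (the version shown matches the 2.7.5 release this document covers):

```shell
$ sdk install springboot
$ spring --version
Spring CLI v2.7.5
```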
If you develop features for the CLI and want access to the version you built, use the following commands:
The preceding instructions install a local instance of
You can see it by running the following command:
OSX Homebrew Installation
If you are on a Mac and use Homebrew, you can install the Spring Boot CLI by using the following commands:
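The Homebrew commands are (the `spring-io/tap` tap is the one published by the Spring team):

```shell
$ brew tap spring-io/tap
$ brew install spring-boot
```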
Homebrew installs
MacPorts Installation
If you are on a Mac and use MacPorts, you can install the Spring Boot CLI by using the following command:
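The MacPorts command is:

```shell
$ sudo port install spring-boot-cli
```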
Command-line Completion
The Spring Boot CLI includes scripts that provide command completion for the BASH and zsh shells. You can
Windows Scoop Installation
If you are on Windows and use Scoop, you can install the Spring Boot CLI by using the following commands:
> scoop bucket add extras
> scoop install springboot
Scoop installs
Quick-start Spring CLI Example
You can use the following web application to test your installation. To start, create a file called
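The conventional quick-start script is a one-file Groovy web application (the file name `app.groovy` and the class name are just the customary sample values; the CLI resolves the Spring dependencies itself):

```groovy
@RestController
class ThisWillActuallyRun {

    @RequestMapping("/")
    String home() {
        "Hello World!"
    }

}
```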
Then run it from a shell, as follows:
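Assuming the script was saved as `app.groovy`, the run command is (the first run is slow, as dependencies are downloaded; subsequent runs are much quicker):

```shell
$ spring run app.groovy
```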
Open
4.4. Developing Your First Spring Boot Application
This section describes how to develop a small “Hello World!” web application that highlights some of Spring Boot’s key features. We use Maven to build this project, since most IDEs support it.
Before we begin, open a terminal and run the following commands to ensure that you have valid versions of Java and Maven installed:
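The version checks are:

```shell
$ java -version
$ mvn -v
```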
4.4.1. Creating the POM
We need to start by creating a Maven
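A minimal `pom.xml` along the following lines serves as a starting point (the group/artifact coordinates are sample values):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>com.example</groupId>
    <artifactId>myproject</artifactId>
    <version>0.0.1-SNAPSHOT</version>

    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>2.7.5</version>
    </parent>

    <!-- Additional lines to be added here... -->

</project>
```

Inheriting from `spring-boot-starter-parent` gives the build sensible defaults and managed dependency versions.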
The preceding listing should give you a working build. You can test it by running
4.4.2. Adding Classpath Dependencies
Spring Boot provides a number of “Starters” that let you add jars to your classpath. Our applications for smoke tests use the
Other “Starters” provide dependencies that you are likely to need when developing a specific type of application. Since we are developing a web application, we add a
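The web starter is added as a plain dependency; no version is required, because the starter parent manages it:

```xml
<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
</dependencies>
```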
The
If you run
4.4.3. Writing the Code
To finish our application, we need to create a single Java file. By default, Maven compiles sources from
Java
Kotlin
Although there is not much code here, quite a lot is going on. We step through the important parts in the next few sections.
The @RestController and @RequestMapping Annotations
The first annotation on our
The
The @EnableAutoConfiguration Annotation
The second class-level annotation is
The “main” Method
The final part of our application is the
4.4.4. Running the Example
At this point, your application should work. Since you used the
If you open a web browser to
To gracefully exit the application, press
4.4.5. Creating an Executable Jar
We finish our example by creating a completely self-contained executable jar file that we could run in production. Executable jars (sometimes called “fat jars”) are archives containing your compiled classes along with all of the jar dependencies that your code needs to run. To create an executable jar, we need to add the
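In a Maven build, the plugin declaration is short, since the configuration (such as the `repackage` execution) is inherited when the `spring-boot-starter-parent` POM is used:

```xml
<build>
    <plugins>
        <plugin>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-maven-plugin</artifactId>
        </plugin>
    </plugins>
</build>
```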
Save your
If you look in the
You should also see a much smaller file named
To run that application, use the
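With the sample coordinates used in this walkthrough (artifact `myproject`, version `0.0.1-SNAPSHOT`), the command would be:

```shell
$ java -jar target/myproject-0.0.1-SNAPSHOT.jar
```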
As before, to exit the application, press
4.5. What to Read Next
Hopefully, this section provided some of the Spring Boot basics and got you on your way to writing your own applications. If you are a task-oriented type of developer, you might want to jump over to spring.io and follow some of the getting started guides that solve specific “How do I do that with Spring?” problems. We also have Spring Boot-specific “How-to” reference documentation.
5. Upgrading Spring Boot
Instructions for how to upgrade from earlier versions of Spring Boot are provided on the project wiki. Follow the links in the release notes section to find the version that you want to upgrade to. Upgrading instructions are always the first item in the release notes. If you are more than one release behind, please make sure that you also review the release notes of the versions that you jumped.
5.1. Upgrading From 1.x
If you are upgrading from the
5.2. Upgrading to a New Feature Release
When upgrading to a new feature release, some properties may have been renamed or removed. Spring Boot provides a way to analyze your application’s environment and print diagnostics at startup, but also temporarily migrate properties at runtime for you. To enable that feature, add the following dependency to your project:
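The migrator is added with runtime scope and should be removed once the migration is complete:

```xml
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-properties-migrator</artifactId>
    <scope>runtime</scope>
</dependency>
```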
5.3. Upgrading the Spring Boot CLI
To upgrade an existing CLI installation, use the appropriate package manager command (for example,
5.4. What to Read Next
Once you’ve decided to upgrade your application, you can find detailed information regarding specific features in the rest of the document. Spring Boot’s documentation is specific to that version, so any information that you find in here will contain the most up-to-date changes that are in that version.
6. Developing with Spring Boot
This section goes into more detail about how you should use Spring Boot. It covers topics such as build systems, auto-configuration, and how to run your applications. We also cover some Spring Boot best practices. Although there is nothing particularly special about Spring Boot (it is just another library that you can consume), there are a few recommendations that, when followed, make your development process a little easier. If you are starting out with Spring Boot, you should probably read the Getting Started guide before diving into this section.
6.1. Build Systems
It is strongly recommended that you choose a build system that supports dependency management and that can consume artifacts published to the “Maven Central” repository. We would recommend that you choose Maven or Gradle. It is possible to get Spring Boot to work with other build systems (Ant, for example), but they are not particularly well supported.
6.1.1. Dependency Management
Each release of Spring Boot provides a curated list of dependencies that it supports. In practice, you do not need to provide a version for any of these dependencies in your build configuration, as Spring Boot manages that for you. When you upgrade Spring Boot itself, these dependencies are upgraded as well in a consistent way.
The curated list contains all the Spring modules that you can use with Spring Boot as well as a refined list of third party libraries. The list is available as a standard Bills of Materials (
6.1.2. Maven
To learn about using Spring Boot with Maven, see the documentation for Spring Boot’s Maven plugin:
6.1.3. Gradle
To learn about using Spring Boot with Gradle, see the documentation for Spring Boot’s Gradle plugin:
6.1.4. Ant
It is possible to build a Spring Boot project using Apache Ant+Ivy. The
To declare dependencies, a typical
A typical
6.1.5. Starters
Starters are a set of convenient dependency descriptors that you can include in your application. You get a one-stop shop for all the Spring and related technologies that you need without having to hunt through sample code and copy-paste loads of
dependency descriptors. For example, if you want to get started using Spring and JPA for database access, include the
The starters contain a lot of the dependencies that you need to get a project up and running quickly and with a consistent, supported set of managed transitive dependencies. The following application starters are provided by Spring Boot under the
In addition to the application starters, the following starters can be used to add production ready features:
Table 2. Spring Boot production starters
Finally, Spring Boot also includes the following starters that can be used if you want to exclude or swap specific technical facets:
Table 3. Spring Boot technical starters
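As an illustration of how a starter is consumed in a Maven build, the JPA starter mentioned earlier is declared like any other dependency (no version is needed when Spring Boot's dependency management is in effect):

```xml
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-jpa</artifactId>
</dependency>
```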
6.2. Structuring Your Code
Spring Boot does not require any specific code layout to work. However, there are some best practices that help.
6.2.1. Using the “default” Package
When a class does not include a
6.2.2. Locating the Main Application Class
We generally recommend that you locate your main application class in a root package above other classes. The
The following listing shows a typical layout:
com
 +- example
     +- myapplication
         +- MyApplication.java
         |
         +- customer
         |   +- Customer.java
         |   +- CustomerController.java
         |   +- CustomerService.java
         |   +- CustomerRepository.java
         |
         +- order
             +- Order.java
             +- OrderController.java
             +- OrderService.java
             +- OrderRepository.java
The
Java
Kotlin
6.3. Configuration Classes
Spring Boot favors Java-based configuration. Although it is possible to use
6.3.1. Importing Additional Configuration Classes
You need not put all your
6.3.2. Importing XML Configuration
If you absolutely must use XML based configuration, we recommend that you still start with a
6.4. Auto-configuration
Spring Boot auto-configuration attempts to automatically configure your Spring application based on the jar dependencies that you have added. For example, if
You need to opt-in to auto-configuration by adding the
6.4.1. Gradually Replacing Auto-configuration
Auto-configuration is non-invasive. At any point, you can start to define your own configuration to replace specific parts of the auto-configuration. For example, if you add your own
If you need to find out what auto-configuration is currently being applied, and why, start your application with the
6.4.2. Disabling Specific Auto-configuration Classes
If you find that specific auto-configuration classes that you do not want are being applied, you can use the exclude attribute of
Java
Kotlin
If the class is not on the classpath, you can use the
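Exclusions can also be controlled with configuration rather than annotations, via the `spring.autoconfigure.exclude` property. For example, to switch off the DataSource auto-configuration (the class name shown is one real auto-configuration class, used here as an illustration):

```properties
spring.autoconfigure.exclude=org.springframework.boot.autoconfigure.jdbc.DataSourceAutoConfiguration
```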
6.5. Spring Beans and Dependency Injection
You are free to use any of the standard Spring Framework techniques to define your beans and their injected dependencies. We generally recommend using constructor injection to wire up dependencies and
If you structure your code as suggested above (locating your application class in a top package), you can add
The following example shows a
Java
Kotlin
If a bean has more than one constructor, you will need to mark the one you want Spring to use with
Java
Kotlin
6.6. Using the @SpringBootApplication Annotation
Many Spring Boot developers like their apps to use auto-configuration, component scan and be able to define extra configuration on their "application class". A single
Java
Kotlin
6.7. Running Your Application
One of the biggest advantages of packaging your application as a jar and using an embedded HTTP server is that you can run your application as you would any other. The same applies to debugging Spring Boot applications. You do not need any special IDE plugins or extensions.
6.7.1. Running From an IDE
You can run a Spring Boot application from your IDE as a Java application. However, you first need to import your project. Import steps vary depending on your IDE and build system. Most IDEs can import Maven projects directly. For example,
Eclipse users can select
If you cannot directly import your project into your IDE, you may be able to generate IDE metadata by using a build plugin. Maven includes plugins for Eclipse and IDEA. Gradle offers plugins for various IDEs.
6.7.2. Running as a Packaged Application
If you use the Spring Boot Maven or Gradle plugins to create an executable jar, you can run your application using
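For example (the jar name is a sample value):

```shell
$ java -jar target/myapplication-0.0.1-SNAPSHOT.jar
```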
It is also possible to run a packaged application with remote debugging support enabled. Doing so lets you attach a debugger to your packaged application, as shown in the following example:
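The standard JDWP agent options do the job; this starts the application with a debugger socket listening on port 8000 (the jar name is a sample value):

```shell
$ java -agentlib:jdwp=server=y,transport=dt_socket,address=8000,suspend=n \
       -jar target/myapplication-0.0.1-SNAPSHOT.jar
```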
6.7.3. Using the Maven Plugin
The Spring Boot Maven plugin includes a
You might also want to use the
6.7.4. Using the Gradle Plugin
The Spring Boot Gradle plugin also includes a
You might also want to use the
$ export JAVA_OPTS=-Xmx1024m
6.7.5. Hot Swapping
Since Spring Boot applications are plain Java applications, JVM hot-swapping should work out of the box. JVM hot swapping is somewhat limited with the bytecode that it can replace. For a more complete solution, JRebel can be used. The
6.8. Developer Tools
Spring Boot includes an additional set of tools that can make the application development experience a little more pleasant. The
Maven
Gradle
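In Maven, the devtools module is declared as optional, which prevents it from being applied transitively to other modules that use your project:

```xml
<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-devtools</artifactId>
        <optional>true</optional>
    </dependency>
</dependencies>
```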
6.8.1. Diagnosing Classloading Issues
As described in the Restart vs Reload section, restart functionality is implemented by using two classloaders. For most applications, this approach works well. However, it can sometimes cause classloading issues, in particular in multi-module projects. To diagnose whether the classloading issues are indeed caused by devtools and its two classloaders, try disabling restart. If this solves your problems, customize the restart classloader to include your entire project.
6.8.2. Property Defaults
Several of the libraries supported by Spring Boot use caches to improve performance. For example, template engines cache compiled templates to avoid repeatedly parsing template files. Also, Spring MVC can add HTTP caching headers to responses when serving static resources. While caching is very beneficial in production, it can be counter-productive during development, preventing you from seeing the changes you just made in your application. For this reason, spring-boot-devtools disables the caching options by default. Cache options are usually configured by settings in your
The following table lists all the properties that are applied:
Because you need more information about web requests while developing Spring MVC and Spring WebFlux applications, developer tools suggest that you enable
6.8.3. Automatic Restart
Applications that use
Logging Changes in Condition Evaluation
By default, each time your application restarts, a report showing the condition evaluation delta is logged. The report shows the changes to your application’s auto-configuration as you make changes such as adding or removing beans and setting configuration properties. To disable the logging of the report, set the following property:
Properties
Yaml
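In the properties form:

```properties
spring.devtools.restart.log-condition-evaluation-delta=false
```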
Excluding Resources
Certain resources do not necessarily need to trigger a restart when they are changed. For example, Thymeleaf templates can be edited in-place. By default, changing resources in
Properties
Yaml
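For example, to have only the `/static` and `/public` locations excluded from restart triggering, you would set:

```properties
spring.devtools.restart.exclude=static/**,public/**
```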
Watching Additional Paths
You may want your application to be restarted or reloaded when you make changes to files that are not on the classpath. To do so, use the
Disabling Restart
If you do not want to use the restart feature, you can disable it by using the
If you need to completely disable restart support (for example, because it does
not work with a specific library), you need to set the
Java
Kotlin
Using a Trigger File
If you work with an IDE that continuously compiles changed files, you might prefer to trigger restarts only at specific times. To do so, you can use a “trigger file”, which is a special file that must be modified when you want to actually trigger a restart check.
To use a trigger file, set the
For example, if you have a project with the following structure:
src
 +- main
     +- resources
         +- .reloadtrigger
Then your
Properties
Yaml
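The property, in properties form, would be:

```properties
spring.devtools.restart.trigger-file=.reloadtrigger
```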
Restarts will now only happen when the
Some IDEs have features that save you from needing to update your trigger file manually. Spring Tools for Eclipse and IntelliJ IDEA (Ultimate Edition) both have such support. With Spring Tools, you can use the “reload” button from the console view (as long as your
Customizing the Restart Classloader
As described earlier in the Restart vs Reload section, restart functionality is implemented by using two classloaders. If this causes issues, you might need to customize what gets loaded by which classloader. By default, any open project in your IDE is loaded with the “restart” classloader, and any regular
You can instruct Spring Boot to load parts of your project with a different classloader by creating a
Properties
Yaml
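A `META-INF/spring-devtools.properties` file uses `restart.exclude.` and `restart.include.` prefixed entries whose values are regex patterns matched against the classpath; the jar-name patterns below are illustrative:

```properties
restart.exclude.companycommonlibs=/mycorp-common-[\w\d-\.]+\.jar
restart.include.projectcommon=/mycorp-myproj-[\w\d-\.]+\.jar
```

`include` entries are items that should be pulled up into the “restart” classloader, and `exclude` entries are items that should be pushed down into the “base” classloader.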
Known Limitations
Restart functionality does not work well with objects that are deserialized by using a standard
Unfortunately, several third-party libraries deserialize without considering the context classloader. If you find such a problem, you need to request a fix with the original authors.
6.8.4. LiveReload
The
If you do not want to start the LiveReload server when your application runs, you can set the
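The LiveReload server is switched off with:

```properties
spring.devtools.livereload.enabled=false
```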
6.8.5. Global Settings
You can configure global devtools settings by adding any of the following files to the
Any properties added to these files apply to all Spring Boot applications on your machine that use devtools. For example, to configure restart to always use a trigger file, you would add the following property to your
Properties
Yaml
By default,
Configuring File System Watcher
FileSystemWatcher works by polling the class changes with a certain time interval, and then waiting for a predefined quiet period to make sure there are no more changes. Since Spring Boot relies entirely on the IDE to compile and copy files into the location from where Spring Boot can read
them, you might find that there are times when certain changes are not reflected when devtools restarts the application. If you observe such problems constantly, try increasing the
Properties
Yaml
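The two watcher intervals, in properties form, matching the values described below:

```properties
spring.devtools.restart.poll-interval=2s
spring.devtools.restart.quiet-period=1s
```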
The monitored classpath directories are now polled every 2 seconds for changes, and a 1 second quiet period is maintained to make sure there are no additional class changes.
6.8.6. Remote Applications
The Spring Boot developer tools are not limited to local development. You can also use several features when running applications remotely. Remote support is opt-in as enabling it can be a security risk. It should only be enabled when running on a trusted network or when secured with SSL. If neither of these options is available to you, you should not use DevTools' remote support. You should never enable support on a production deployment. To enable it, you need to make sure that
Then you need to set the Remote devtools support is provided in two parts: a server-side endpoint that accepts connections and a client application that you run in your IDE. The server component is automatically enabled when the
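The shared secret is set with the `spring.devtools.remote.secret` property; the value shown is a sample:

```properties
spring.devtools.remote.secret=mysecret
```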
Running the Remote Client Application
The remote client application is designed to be run from within your IDE. You need to run
For example, if you are using Eclipse or Spring Tools and you have a project named
A running remote client might resemble the following listing:
  .   ____          _                                              __ _ _
 /\\ / ___'_ __ _ _(_)_ __  __ _          ___               _      \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` |        | _ \___ _ __  ___| |_ ___ \ \ \ \
 \\/  ___)| |_)| | | | | || (_| []::::::[]  / -_) ' \/ _ \  _/ -_) ) ) ) )
  '  |____| .__|_| |_|_| |_\__, |        |_|_\___|_|_|_\___/\__\___|/ / / /
 =========|_|==============|___/===================================/_/_/_/
 :: Spring Boot Remote :: (v2.7.5)

2022-10-20 12:40:15.175  INFO 16215 --- [           main] o.s.b.devtools.RemoteSpringApplication   : Starting RemoteSpringApplication v2.7.5 using Java 1.8.0_345 on myhost with PID 16215 (/Users/myuser/.m2/repository/org/springframework/boot/spring-boot-devtools/2.7.5/spring-boot-devtools-2.7.5.jar started by myuser in /opt/apps/)
2022-10-20 12:40:15.182  INFO 16215 --- [           main] o.s.b.devtools.RemoteSpringApplication   : No active profile set, falling back to 1 default profile: "default"
2022-10-20 12:40:15.913  INFO 16215 --- [           main] o.s.b.d.a.OptionalLiveReloadServer       : LiveReload server is running on port 35729
2022-10-20 12:40:15.946  INFO 16215 --- [           main] o.s.b.devtools.RemoteSpringApplication   : Started RemoteSpringApplication in 1.471 seconds (JVM running for 2.063)
Remote Update
The remote client monitors your application classpath for changes in the same way as the local restart. Any updated resource is pushed to the remote application and (if required) triggers a restart. This can be helpful if you iterate on a feature that uses a cloud service that you do not have locally. Generally, remote updates and restarts are much quicker than a full rebuild and deploy cycle. On a slower development environment, it may happen that the quiet period is not enough, and the changes in the classes may be split into batches. The server is restarted after the first batch of class changes is uploaded. The next batch can’t be sent to the application, since the server is restarting. This is typically manifested by a warning in the
6.9. Packaging Your Application for Production
Executable jars can be used for production deployment. As they are self-contained, they are also ideally suited for cloud-based deployment. For additional “production ready” features, such as health, auditing, and metric
REST or JMX end-points, consider adding
6.10. What to Read Next
You should now understand how you can use Spring Boot and some best practices that you should follow. You can now go on to learn about specific Spring Boot features in depth, or you could skip ahead and read about the “production ready” aspects of Spring Boot.
7. Core Features
This section dives into the details of Spring Boot. Here you can learn about the key features that you may want to use and customize. If you have not already done so, you might want to read the "Getting Started" and "Developing with Spring Boot" sections, so that you have a good grounding of the basics.
7.1. SpringApplication
The
Java
Kotlin
When your application starts, you should see something similar to the following output:
  .   ____          _            __ _ _
 /\\ / ___'_ __ _ _(_)_ __  __ _ \ \ \ \
( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \
 \\/  ___)| |_)| | | | | || (_| |  ) ) ) )
  '  |____| .__|_| |_|_| |_\__, | / / / /
 =========|_|==============|___/=/_/_/_/
 :: Spring Boot ::                (v2.7.5)

2022-10-20 12:40:17.841  INFO 16284 --- [           main] o.s.b.d.f.s.MyApplication                : Starting MyApplication using Java 1.8.0_345 on myhost with PID 16284 (/opt/apps/myapp.jar started by myuser in /opt/apps/)
2022-10-20 12:40:17.849  INFO 16284 --- [           main] o.s.b.d.f.s.MyApplication                : No active profile set, falling back to 1 default profile: "default"
2022-10-20 12:40:20.443  INFO 16284 --- [           main] o.s.b.w.embedded.tomcat.TomcatWebServer  : Tomcat initialized with port(s): 8080 (http)
2022-10-20 12:40:20.455  INFO 16284 --- [           main] o.apache.catalina.core.StandardService   : Starting service [Tomcat]
2022-10-20 12:40:20.455  INFO 16284 --- [           main] org.apache.catalina.core.StandardEngine  : Starting Servlet engine: [Apache Tomcat/9.0.68]
2022-10-20 12:40:20.716  INFO 16284 --- [           main] o.a.c.c.C.[Tomcat].[localhost].[/]       : Initializing Spring embedded WebApplicationContext
2022-10-20 12:40:20.716  INFO 16284 --- [           main] w.s.c.ServletWebServerApplicationContext : Root WebApplicationContext: initialization completed in 2566 ms
2022-10-20 12:40:22.045  INFO 16284 --- [           main] o.s.b.w.embedded.tomcat.TomcatWebServer  : Tomcat started on port(s): 8080 (http) with context path ''
2022-10-20 12:40:22.073  INFO 16284 --- [           main] o.s.b.d.f.s.MyApplication                : Started MyApplication in 4.937 seconds (JVM running for 6.049)
By default,
7.1.1. Startup Failure
If your application fails to start, registered
***************************
APPLICATION FAILED TO START
***************************

Description:

Embedded servlet container failed to start. Port 8080 was already in use.

Action:

Identify and stop the process that is listening on port 8080 or configure this application to listen on another port.
If no failure analyzers are able to handle the exception, you can still display the full conditions report to better understand what went wrong. To do so, you need to enable the
For instance, if you are running your application by using
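With the `--debug` flag (the jar name is a sample value):

```shell
$ java -jar myproject-0.0.1-SNAPSHOT.jar --debug
```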
7.1.2. Lazy Initialization
A downside of lazy initialization is that it can delay the discovery of a problem with the application. If a misconfigured bean is initialized lazily, a failure will no longer occur during startup and the problem will only become apparent when the bean is initialized. Care must also be taken to ensure that the JVM has sufficient memory to accommodate all of the application’s beans and not just those that are initialized during startup. For these reasons, lazy initialization is not enabled by default and it is recommended that fine-tuning of the JVM’s heap size is done before enabling lazy initialization.
Lazy initialization can be enabled programmatically using the
Properties
Yaml
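In the properties form:

```properties
spring.main.lazy-initialization=true
```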
7.1.3. Customizing the Banner
The banner that is printed on start up can be changed by adding a
Inside your
You can also use the
The printed banner is registered as a singleton bean under the following name:
7.1.4. Customizing SpringApplication
If the
Java
Kotlin
It is also possible to configure the
7.1.5. Fluent Builder API
If you need to build an
The
Java
Kotlin
7.1.6. Application Availability
When deployed on platforms, applications can provide information about their availability to the platform using infrastructure such as Kubernetes Probes. Spring Boot includes out-of-the-box support for the commonly used “liveness” and “readiness” availability states. If you are using Spring Boot’s “actuator” support then these states are exposed as health endpoint groups. In addition, you can also obtain availability states by injecting the
Liveness State
The “Liveness” state of an application tells whether its internal state allows it to work correctly, or recover by itself if it is currently failing. A broken “Liveness” state means that the application is in a state that it cannot recover from, and the infrastructure should restart the application.
The internal state of Spring Boot applications is mostly represented by the Spring
Readiness State
The “Readiness” state of an application tells whether the application is ready to handle traffic. A failing “Readiness” state tells the platform that it should not route traffic to the application for now. This typically happens during startup, while
Managing the Application Availability State
Application components can retrieve the current availability state at any time, by injecting the
For example, we can export the "Readiness" state of the application to a file so that a Kubernetes "exec Probe" can look at this file:
Java
Kotlin
We can also update the state of the application, when the application breaks and cannot recover:
Java
Kotlin
7.1.7. Application Events and Listeners
In addition to the usual Spring Framework events, such as
Application events are sent in the following order, as your application runs:
The above list only includes
Application events are sent by using Spring Framework’s event publishing mechanism. Part of this mechanism ensures that an event published to the listeners in a child context is also published to the listeners in any ancestor contexts. As a result of this, if your application uses a hierarchy of
To allow your listener to distinguish between an event for its
context and an event for a descendant context, it should request that its application context is injected and then compare the injected context with the context of the event. The context can be injected by implementing
7.1.8. Web Environment
A
This means that if you are using Spring MVC and the new
It is also possible to take complete control of the
7.1.9. Accessing Application Arguments
If you need to access the application arguments that were passed to
Java
Kotlin
7.1.10. Using the ApplicationRunner or CommandLineRunner
If you need to run some specific code once the
The
Java
Kotlin
If several
7.1.11. Application Exit
Each
In addition, beans may implement the
Java
Kotlin
Also, the
If there is more than
7.1.12. Admin Features
It is possible to enable admin-related features for the
application by specifying the
7.1.13. Application Startup tracking
During the application startup, the
You can choose an
Java
Kotlin
The first available implementation,
Spring Boot ships with the
Spring Boot can also be configured to expose a
7.2. Externalized Configuration
Spring Boot lets you externalize your configuration so that you can work with the same application code in different environments. You can use a variety of external configuration sources, including Java properties files, YAML files, environment variables, and command-line arguments. Property values can be injected directly into your beans by using the
Spring Boot uses a very particular
Config data files are considered in the following order:
To provide a concrete example, suppose you develop a
Java
Kotlin
On your application classpath (for example, inside your jar) you can have an
7.2.1. Accessing Command Line Properties
By default,
If you do not want command line properties to be added to the
7.2.2. JSON Application Properties
Environment variables and system properties often have restrictions that mean some property names cannot be used. To help with this, Spring Boot allows you to encode a block of properties into a single JSON structure. When your application starts, any
For example, the
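On a UN*X shell, the SPRING_APPLICATION_JSON environment variable can be supplied on the command line (the `my.name` property and `myapp.jar` name are sample values):

```shell
$ SPRING_APPLICATION_JSON='{"my":{"name":"test"}}' java -jar myapp.jar
```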
In the preceding example, you end up with The same JSON can also be provided as a system property:
Or you could supply the JSON by using a command line argument:
If you are deploying to a classic Application Server, you could also use a JNDI variable named
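The same JSON document can be supplied in any of the three ways described in the preceding paragraphs (the jar name here is a placeholder for your own application):

```shell
# As an environment variable in a UN*X shell:
SPRING_APPLICATION_JSON='{"my":{"name":"test"}}' java -jar myapp.jar

# As a system property:
java -Dspring.application.json='{"my":{"name":"test"}}' -jar myapp.jar

# As a command line argument:
java -jar myapp.jar --spring.application.json='{"my":{"name":"test"}}'
```

In each case the application ends up with my.name=test in the Spring Environment.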
7.2.3. External Application PropertiesSpring Boot will automatically find and load
The list is ordered by precedence (with values from lower items overriding earlier ones). Documents from the loaded files are added as If you do not
like
You can also refer to an explicit location by using the The following example shows how to specify two distinct files:
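For example, two distinct files might be supplied via spring.config.location like this (the jar name is a placeholder):

```shell
java -jar myapp.jar --spring.config.location=\
    optional:classpath:/default.properties,\
    optional:classpath:/override.properties
```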
If
In most situations, each If you have a complex location setup, and you use profile-specific configuration files, you may need to provide further hints so that Spring Boot knows how they should be grouped. A location group is a collection of locations that are all considered at the same
level. For example, you might want to group all classpath locations, then all external locations. Items within a location group should be separated with Locations configured by using
If you prefer to add additional locations, rather than replacing them, you can use
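As a sketch, additional locations can be appended without replacing the defaults via spring.config.additional-location (the paths are illustrative):

```shell
java -jar myapp.jar --spring.config.additional-location=\
    optional:classpath:/custom-config/,\
    optional:file:./custom-config/
```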
This search ordering lets you specify default values in one configuration file and then selectively override those values in another. You can provide default values for your application in
Optional LocationsBy default, when a specified config data location does not exist, Spring Boot will throw a If you want to specify a location, but you do not mind if it does not always exist, you can use the
For example, a If you want to ignore all Wildcard LocationsIf a config file location includes the For example, if you have some Redis configuration and some MySQL configuration, you might want to keep those two pieces of configuration separate, while requiring that both those are present in an By default, Spring Boot includes You can use wildcard locations yourself with the
Profile Specific FilesAs well as Profile-specific properties are loaded from the same locations as standard
The
Importing Additional DataApplication properties may import further config data from other locations using the For example, you might have the following in your classpath Properties
Yaml
This will trigger the import of a An import will only be imported once no matter how many times it is declared. The order an import is defined inside a single document within the properties/yaml file does not matter. For instance, the two examples below produce the same result: Properties
Yaml
Properties
Yaml
In both of the above examples, the values from the Several locations can be specified under a single
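A minimal import declaration, using the optional: prefix so that a missing file is tolerated, looks like this:

```properties
spring.config.import=optional:file:./dev.properties
```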
Importing Extensionless FilesSome cloud platforms cannot add a file extension to volume mounted files. To import these extensionless files, you need to give Spring Boot a hint so that it knows how to load them. You can do this by putting an extension hint in square brackets. For example, suppose you have a Properties
Yaml
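For instance, an extensionless file mounted at /etc/config/myconfig that actually contains YAML can be imported with an extension hint in square brackets:

```properties
spring.config.import=file:/etc/config/myconfig[.yaml]
```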
Using Configuration TreesWhen running applications on a cloud platform (such as Kubernetes) you often need to read config values that the platform supplies. It is not uncommon to use environment variables for such purposes, but this can have drawbacks, especially if the value is supposed to be kept secret. As an alternative to environment variables, many cloud platforms now allow you to map configuration into mounted data volumes. For example, Kubernetes can volume mount both
There are two common volume mount patterns that can be used:
For the first case, you can import the YAML or Properties file directly using spring.config.import. As an example, let’s imagine that Kubernetes has mounted the following volume:

etc/
  config/
    myapp/
      username
      password

The contents of the username and password files would be used as property values. To import these properties, you can add the following to your application.properties or application.yaml file: Properties
Yaml
You can then access or inject
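A config tree import for the example volume above can be declared with a single line (the optional: prefix tolerates a missing directory):

```properties
spring.config.import=optional:configtree:/etc/config/
```

This yields myapp.username and myapp.password properties whose values are the contents of the mounted files.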
If you have multiple config trees to import from the same parent folder you can use a wildcard shortcut. Any configtree: location that ends with /*/ will import all immediate children as config trees. For example, given the following volume:

etc/
  config/
    dbconfig/
      db/
        username
        password
    mqconfig/
      mq/
        username
        password

You can use configtree:/etc/config/*/ as the import location: Properties
Yaml
This will add
Configuration trees can also be used for Docker secrets. When a Docker swarm service is granted access to a secret, the secret gets mounted into the container. For example, if a secret named Properties
Yaml
Property PlaceholdersThe values in The use of placeholders with and without defaults is shown in the following example: Properties
Yaml
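Placeholder resolution, with and without a default value, can be sketched like this:

```properties
app.name=MyApp
app.description=${app.name} is a Spring Boot application written by ${username:Unknown}
```

If username is not set elsewhere, app.description resolves to “MyApp is a Spring Boot application written by Unknown”.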
Assuming that the
Working With Multi-Document Files
Spring Boot allows you to split a single physical file into multiple logical documents which are each added independently. Documents are processed in order, from top to bottom. Later documents can override the properties defined in earlier ones. For example, the following file has two logical documents:
For
Activation PropertiesIt is sometimes useful to only activate a given set of properties when certain conditions are met. For example, you might have properties that are only relevant when a specific profile is active. You can
conditionally activate a properties document using The following activation properties are available: Table 5. activation properties
For example, the following specifies that the second document is only active when running on Kubernetes, and only when either the “prod” or “staging” profiles are active: Properties
Yaml
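Such a conditional document can be expressed with YAML multi-document syntax, for example:

```yaml
myprop: "always-set"
---
spring:
  config:
    activate:
      on-cloud-platform: "kubernetes"
      on-profile: "prod | staging"
myotherprop: "sometimes-set"
```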
7.2.4. Encrypting PropertiesSpring Boot does not provide any built in support for encrypting property values, however, it does provide the hook points necessary to modify values contained in the Spring If you need a secure way to store credentials and passwords, the Spring Cloud Vault project provides support for storing externalized configuration in HashiCorp Vault. 7.2.5. Working With YAMLYAML is a superset of JSON and, as such, is a convenient format for specifying hierarchical configuration data. The
Mapping YAML to PropertiesYAML documents need to be converted from their hierarchical format to a flat structure that can be used with the Spring
In order to access
these properties from the
Likewise, YAML lists also need to be flattened. They are represented as property keys with
The preceding example would be transformed into these properties:
Directly Loading YAMLSpring Framework provides two convenient classes that can be used to load YAML documents. The You can also use the 7.2.6. Configuring Random ValuesThe Properties
Yaml
The 7.2.7. Configuring System Environment PropertiesSpring Boot supports setting a prefix for environment properties. This is useful if the system environment is shared by multiple Spring Boot applications with different configuration requirements. The prefix for system environment properties can be set directly on For example, if you set the prefix to 7.2.8. Type-safe Configuration PropertiesUsing the JavaBean Properties BindingIt is possible to bind a bean declaring standard JavaBean properties as shown in the following example: Java
Kotlin
The preceding POJO defines the following properties:
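A minimal sketch of such a JavaBean-bound class follows. The @ConfigurationProperties annotation (from Spring Boot) is shown only as a comment so that the snippet has no framework dependency; the prefix and property names are illustrative:

```java
// With Spring Boot on the classpath you would add:
// @ConfigurationProperties("my.service")
public class MyServiceProperties {

    // Bound from my.service.enabled
    private boolean enabled;

    // Bound from my.service.remote-address (thanks to relaxed binding)
    private String remoteAddress;

    public boolean isEnabled() { return enabled; }
    public void setEnabled(boolean enabled) { this.enabled = enabled; }

    public String getRemoteAddress() { return remoteAddress; }
    public void setRemoteAddress(String remoteAddress) { this.remoteAddress = remoteAddress; }

    public static void main(String[] args) {
        MyServiceProperties props = new MyServiceProperties();
        props.setEnabled(true);
        props.setRemoteAddress("127.0.0.1");
        System.out.println(props.isEnabled() + " " + props.getRemoteAddress());
    }
}
```

Spring Boot binds external properties onto such a bean through its getters and setters; no binding code of your own is required.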
Constructor BindingThe example in the previous section can be rewritten in an immutable fashion as shown in the following example: Java
Kotlin
In this setup, the Nested members of a Default values can be specified using Referring to the previous example, if no properties are bound to Java
Kotlin
Enabling @ConfigurationProperties-annotated TypesSpring Boot provides infrastructure to bind Sometimes, classes annotated with Java
Kotlin
To use configuration property scanning, add the Java
Kotlin
We recommend that Using @ConfigurationProperties-annotated TypesThis style of configuration works particularly well with the
To work with Java
Kotlin
Third-party ConfigurationAs well as using To configure a bean from the Java
Kotlin
Any JavaBean property defined with the Relaxed BindingSpring Boot uses some relaxed rules for binding As an example, consider the following Java
Kotlin
With the preceding code, the following properties names can all be used: Table 6. relaxed binding
Binding Maps When binding to For example, consider binding the following properties to a Properties
Yaml
The properties above will bind to a When binding to scalar values, keys with Binding From Environment Variables Most operating systems impose strict rules around the names that can be used for environment variables. For example, Linux shell variables can contain only letters ( Spring Boot’s relaxed binding rules are, as much as possible, designed to be compatible with these naming restrictions. To convert a property name in the canonical-form to an environment variable name you can follow these rules:
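The documented conversion rules (replace dots with underscores, remove dashes, uppercase everything) can be sketched in plain Java; toEnvVar is a hypothetical helper for illustration, not a Spring API:

```java
public class RelaxedBinding {

    // Convert a canonical-form property name to its environment variable form:
    // 1. replace dots (.) with underscores (_)
    // 2. remove any dashes (-)
    // 3. convert to uppercase
    static String toEnvVar(String propertyName) {
        return propertyName.replace(".", "_").replace("-", "").toUpperCase();
    }

    public static void main(String[] args) {
        // prints SPRING_MAIN_LOGSTARTUPINFO
        System.out.println(toEnvVar("spring.main.log-startup-info"));
    }
}
```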
For example, the configuration property Environment variables can also be used when binding to object lists. To bind to a For example, the configuration property Merging Complex TypesWhen lists are configured in more than one place, overriding works by replacing the entire list. For example, assume a Java
Kotlin
Consider the following configuration: Properties
Yaml
If the When a Properties
Yaml
In the preceding example, if the For Java
Kotlin
Consider the following configuration: Properties
Yaml
If the
Properties ConversionSpring Boot attempts to coerce the external application properties to the right type when it binds to the
Converting Durations Spring Boot has dedicated support for expressing durations. If you expose a
Consider the following example: Java
Kotlin
To specify a session timeout of 30 seconds, You can also use any of the supported units. These are:
The default unit is milliseconds and can be overridden using If you prefer to use constructor binding, the same properties can be exposed, as shown in the following example: Java
Kotlin
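Spring Boot accepts either the standard ISO-8601 format handled by java.time.Duration or a simple value-plus-unit format such as 30s; the ISO-8601 half can be demonstrated with the JDK alone:

```java
import java.time.Duration;

public class DurationExample {
    public static void main(String[] args) {
        // ISO-8601 representation of a 30 second timeout
        Duration timeout = Duration.parse("PT30S");
        System.out.println(timeout.getSeconds()); // prints 30
    }
}
```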
Converting Periods In addition to durations, Spring Boot can also work with
The following units are supported with the simple format:
Converting Data Sizes Spring Framework has a
Consider the following example: Java
Kotlin
To specify a buffer size of 10 megabytes, You can also use any of the supported units. These are:
The default unit is bytes and can be overridden using If you prefer to use constructor binding, the same properties can be exposed, as shown in the following example: Java
Kotlin
@ConfigurationProperties ValidationSpring Boot attempts to validate Java
Kotlin
To ensure that validation is always triggered for nested properties, even when no properties are found, the associated field must be annotated with Java
Kotlin
You can also add a custom Spring
@ConfigurationProperties vs. @ValueThe
If you define a set of configuration keys for your own components, we recommend you group them in a POJO annotated with
7.3. ProfilesSpring Profiles provide a way to segregate parts of your application
configuration and make it be available only in certain environments. Any Java
Kotlin
You can use a Properties
Yaml
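Activating profiles from application.properties can be sketched as:

```properties
spring.profiles.active=dev,hsqldb
```

The equivalent command line switch is --spring.profiles.active=dev,hsqldb.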
You could also specify it on the command line by using the following switch: If no profile is active, a default
profile is enabled. The name of the default profile is Properties
Yaml
For example, the second document configuration is invalid: Properties
Yaml
7.3.1. Adding Active ProfilesThe Sometimes, it
is useful to have properties that add to the active profiles rather than replace them. The For example, when an application with the following properties is run, the common and local profiles will be activated even when it runs using the --spring.profiles.active switch: Properties
Yaml
Profile groups, which are described in the next section, can also be used to add active profiles if a given profile is active.

7.3.2. Profile Groups
Occasionally the
profiles that you define and use in your application are too fine-grained and become cumbersome to use. For example, you might have To help with this, Spring Boot lets you define profile groups. A profile group allows you to define a logical name for a related group of profiles. For example, we can create a
Properties
Yaml
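A production group that pulls in hypothetical proddb and prodmq profiles might look like this:

```properties
spring.profiles.group.production[0]=proddb
spring.profiles.group.production[1]=prodmq
```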
Our application can now be started using --spring.profiles.active=production to activate the production, proddb and prodmq profiles in one hit.

7.3.3. Programmatically Setting Profiles
You can programmatically
set active profiles by calling 7.3.4. Profile-specific Configuration FilesProfile-specific variants of both 7.4. LoggingSpring Boot uses Commons Logging for all internal logging but leaves the underlying log implementation open. Default configurations are provided for Java Util Logging, Log4J2, and Logback. In each case, loggers are pre-configured to use console output with optional file output also available. By default, if you use the “Starters”, Logback is used for logging. Appropriate Logback routing is also included to ensure that dependent libraries that use Java Util Logging, Commons Logging, Log4J, or SLF4J all work correctly.
7.4.1. Log Format
The default log output from Spring Boot resembles the following example:

2022-10-20 12:40:11.311 INFO 16138 --- [ main] o.s.b.d.f.s.MyApplication : Starting MyApplication using Java 1.8.0_345 on myhost with PID 16138 (/opt/apps/myapp.jar started by myuser in /opt/apps/)
2022-10-20 12:40:11.330 INFO 16138 --- [ main] o.s.b.d.f.s.MyApplication : No active profile set, falling back to 1 default profile: "default"
2022-10-20 12:40:13.056 INFO 16138 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat initialized with port(s): 8080 (http)
2022-10-20 12:40:13.070 INFO 16138 --- [ main] o.apache.catalina.core.StandardService : Starting service [Tomcat]
2022-10-20 12:40:13.070 INFO 16138 --- [ main] org.apache.catalina.core.StandardEngine : Starting Servlet engine: [Apache Tomcat/9.0.68]
2022-10-20 12:40:13.178 INFO 16138 --- [ main] o.a.c.c.C.[Tomcat].[localhost].[/] : Initializing Spring embedded WebApplicationContext
2022-10-20 12:40:13.178 INFO 16138 --- [ main] w.s.c.ServletWebServerApplicationContext : Root WebApplicationContext: initialization completed in 1762 ms
2022-10-20 12:40:13.840 INFO 16138 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat started on port(s): 8080 (http) with context path ''
2022-10-20 12:40:13.850 INFO 16138 --- [ main] o.s.b.d.f.s.MyApplication : Started MyApplication in 4.062 seconds (JVM running for 5.452)

The following items are output:
7.4.2. Console OutputThe default log configuration echoes messages to the console as they are written. By default,
When the debug mode is enabled, a selection of core loggers (embedded container, Hibernate, and Spring Boot) are configured to output more information. Enabling the debug mode does not configure your application to log all messages with Alternatively, you can enable a “trace” mode by starting your application with a Color-coded OutputIf your terminal supports ANSI, color output is used to aid readability. You can set Color coding is configured by using the The following table describes the mapping of log levels to colors:
Alternatively, you can specify the color or style that should be used by providing it as an option to the conversion. For example, to make the text yellow, use the following setting:
The following colors and styles are supported:
7.4.3. File OutputBy default, Spring Boot logs only to the console and does not write log files. If you want to write log files in addition to the console output, you need to set a The following table shows how the
Log files rotate when they reach 10 MB and, as with console output,
7.4.4. File Rotation
If you are using Logback, it is possible to fine-tune log rotation settings using your application.properties or application.yaml file. The following rotation policy properties are supported:
7.4.5. Log LevelsAll the supported logging systems can have the logger levels set in the Spring The
following example shows potential logging settings in Properties
Yaml
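For example, typical level settings in application.properties might look like this:

```properties
logging.level.root=warn
logging.level.org.springframework.web=debug
logging.level.org.hibernate=error
```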
It is also possible to set logging levels using environment variables. For example,
7.4.6. Log Groups
It is often useful to be able to group related loggers together so that they can all be configured at the same time. For example, you might commonly change the logging levels for all Tomcat related loggers, but you cannot easily remember the top-level packages. To
help with this, Spring Boot allows you to define logging groups in your Spring Properties
Yaml
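A hypothetical “tomcat” group covering the Tomcat-related packages can be defined like this:

```properties
logging.group.tomcat=org.apache.catalina,org.apache.coyote,org.apache.tomcat
```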
Once defined, you can change the level for all the loggers in the group with a single line: Properties
Yaml
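For a group named “tomcat”, that single line might be:

```properties
logging.level.tomcat=trace
```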
Spring Boot includes the following pre-defined logging groups that can be used out-of-the-box:
7.4.7. Using a Log Shutdown HookIn order to release logging resources when your application terminates, a shutdown hook that will trigger log system cleanup when the JVM exits is provided. This shutdown hook is registered automatically unless your application is deployed as a war
file. If your application has complex context hierarchies the shutdown hook may not meet your needs. If it does not, disable the shutdown hook and investigate the options provided directly by the underlying logging system. For example, Logback offers context selectors which allow each Logger to be created in its own context. You can use the Properties
Yaml
7.4.8. Custom Log ConfigurationThe various logging systems can be activated by including the appropriate
libraries on the classpath and can be further customized by providing a suitable configuration file in the root of the classpath or in a location specified by the following Spring You can force Spring Boot to use a particular logging system by using the
Depending on your logging system, the following files are loaded:
To help with the customization, some other properties are transferred from the Spring
If you use Logback, the following properties are also transferred:
All the supported logging systems can consult System properties when parsing their configuration files. See the default configurations in
7.4.9. Logback ExtensionsSpring Boot includes a number of extensions to Logback that can help with advanced configuration. You can use these extensions in your
ERROR in ch.qos.logback.core.joran.spi.Interpreter@4:71 - no applicable action for [springProperty], current ElementPath is [[configuration][springProperty]]
ERROR in ch.qos.logback.core.joran.spi.Interpreter@4:71 - no applicable action for [springProfile], current ElementPath is [[configuration][springProfile]]

Profile-specific Configuration
The
Environment PropertiesThe
7.5. InternationalizationSpring Boot supports localized messages so that your application can cater to users of different language preferences. By default, Spring Boot looks for the presence of a
The basename of the resource bundle as well as several other attributes can be configured using the Properties
Yaml
7.6. JSONSpring Boot provides integration with three JSON mapping libraries:
Jackson is the preferred and default library.

7.6.1. Jackson
Auto-configuration for Jackson is provided and Jackson is part of spring-boot-starter-json. When Jackson is on the classpath an ObjectMapper bean is automatically configured.

Custom Serializers and Deserializers
If you use Jackson to
serialize and deserialize JSON data, you might want to write your own You can use the Java
Kotlin
All The example above can be rewritten to use Java
Kotlin
MixinsJackson has support for mixins that can be used to mix additional annotations into those already declared on a target class. Spring Boot’s Jackson auto-configuration will scan your application’s packages for classes annotated with 7.6.2. GsonAuto-configuration for Gson is provided. When Gson is on the classpath a 7.6.3. JSON-BAuto-configuration for JSON-B is provided. When the JSON-B API and an implementation are on the classpath a 7.7. Task Execution and SchedulingIn the absence of an
The thread pool uses 8 core threads that can grow and shrink according to the load. Those default settings can be fine-tuned using the Properties
Yaml
This changes the thread pool to use a bounded queue so that when the queue is full (100 tasks), the thread pool increases to maximum 16 threads. Shrinking of the pool is more aggressive as threads are reclaimed when they are idle for 10 seconds (rather than 60 seconds by default). A Properties
Yaml
Both a 7.8. TestingSpring Boot provides a number of utilities and annotations to help when testing your application. Test support is provided by two modules: Most
developers use the
7.8.1. Test Scope DependenciesThe
We generally find these common libraries to be useful when writing tests. If these libraries do not suit your needs, you can add additional test dependencies of your own. 7.8.2. Testing Spring ApplicationsOne of the major advantages of dependency injection is that it should make
your code easier to unit test. You can instantiate objects by using the Often, you need to move beyond unit testing and start integration testing (with a Spring The Spring Framework includes a dedicated test
module for such integration testing. You can declare a dependency directly to If you have not used the 7.8.3. Testing Spring Boot ApplicationsA Spring Boot application is a Spring
By default,
Detecting Web Application TypeIf Spring MVC is available, a regular MVC-based application context is configured. If you have only Spring WebFlux, we will detect that and configure a WebFlux-based application context instead. If both are present, Spring MVC takes precedence. If you want to test a reactive web application in this scenario, you must set the Java
Kotlin
Detecting Test ConfigurationIf you are familiar with the Spring Test Framework, you may be used to using When testing Spring Boot applications, this is often not required. Spring
Boot’s The search algorithm works up from the package that contains the test until it finds a class annotated with
If you want to customize the primary configuration, you can use a nested
Excluding Test ConfigurationIf your application uses component scanning (for example, if you use As we have seen earlier, Java
Kotlin
Using Application ArgumentsIf your application expects
arguments, you can have Java
Kotlin
Testing With a Mock EnvironmentBy default, With Spring MVC, we can query our web endpoints using
Java
Kotlin
With Spring WebFlux endpoints, you can use Java
Kotlin
Testing With a Running ServerIf you need to start a full running server, we recommend that you use random ports. If you use The Java
Kotlin
This setup requires Java
Kotlin
Customizing WebTestClient
To customize the WebTestClient bean, configure a WebTestClientBuilderCustomizer bean. Any such beans are called with the WebTestClient.Builder that is used to create the WebTestClient.

Using JMX
As the test context framework caches contexts, JMX is disabled by default to prevent identical components from registering on the same domain. If such a test needs access to an
Kotlin
Using MetricsRegardless of your classpath, meter registries, except the in-memory backed, are not auto-configured when using If you need to export metrics to a different backend as part of an integration test, annotate it with Mocking and Spying BeansWhen running tests, it is sometimes necessary to mock certain components within your application context. For example, you may have a facade over some remote service that is unavailable during development. Mocking can also be useful when you want to simulate failures that might be hard to trigger in a real environment. Spring Boot includes a
The following example replaces an existing Java
Kotlin
Additionally, you can use
Auto-configured TestsSpring Boot’s auto-configuration system works well for applications but can sometimes be a little too much for tests. It often helps to load only the parts of the configuration that are required to test a “slice” of your application. For example, you might want to test that Spring MVC controllers are mapping URLs correctly, and you do not want to involve database calls in those tests, or you might want to test JPA entities, and you are not interested in the web layer when those tests run. The
Auto-configured JSON TestsTo test that object JSON serialization and deserialization is working as expected, you can use the
If you need to configure elements of the auto-configuration, you can use the Spring Boot includes AssertJ-based helpers that work with the JSONAssert and JsonPath libraries to check that JSON appears as expected. The Java
Kotlin
If you use Spring Boot’s AssertJ-based helpers to assert on a number value at a given JSON path, you might not be able to use Java
Kotlin
Auto-configured Spring MVC TestsTo test whether Spring MVC controllers are working as expected, use the
Often,
Java
Kotlin
If you use HtmlUnit and Selenium, auto-configuration also provides an HtmlUnit Java
Kotlin
If you have Spring Security on the classpath,
Auto-configured Spring WebFlux TestsTo test that Spring WebFlux controllers are working as expected, you can use the
Often,
Java
Kotlin
Auto-configured Spring GraphQL TestsSpring GraphQL offers a dedicated testing support module; you’ll need to add it to your project: Maven
Gradle
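With Maven, the dependency (its version managed by Spring Boot’s dependency management) looks like this:

```xml
<dependency>
    <groupId>org.springframework.graphql</groupId>
    <artifactId>spring-graphql-test</artifactId>
    <scope>test</scope>
</dependency>
```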
This
testing module ships the GraphQlTester. The tester is used heavily in tests, so be sure to become familiar with it. There are
Spring Boot helps you to test your Spring GraphQL Controllers with the
Often, Java
Kotlin
Java
Kotlin
Auto-configured Data Cassandra TestsYou can use
The following example shows a typical setup for using Cassandra tests in Spring Boot: Java
Kotlin
Auto-configured Data Couchbase TestsYou can use
The following example shows a typical setup for using Couchbase tests in Spring Boot: Java
Kotlin
Auto-configured Data Elasticsearch TestsYou can use
The following example shows a typical setup for using Elasticsearch tests in Spring Boot: Java
Kotlin
Auto-configured Data JPA TestsYou can use the Regular By default, data JPA tests are transactional and roll back at the end of each test. See the relevant section in the Spring Framework Reference Documentation for more details. If that is not what you want, you can disable transaction management for a test or for the whole class as follows: Java
Kotlin
Data JPA tests may also inject a
A Java
Kotlin
In-memory embedded databases generally work well for tests, since they are fast and do not require any installation. If, however, you prefer to run tests against a real database you can use the Java
Kotlin
Auto-configured JDBC Tests
By default, JDBC tests are transactional and roll back at the end of each test. See the relevant section in the Spring Framework Reference Documentation for more details. If that is not what you want, you can disable transaction management for a test or for the whole class, as follows: Java
Kotlin
If you prefer your test to run against a real database, you can use the Auto-configured Data JDBC Tests
By default, Data JDBC tests are transactional and roll back at the end of each test. See the relevant section in the Spring Framework Reference Documentation for more details. If that is not what you want, you can disable transaction management for a test or for the whole test class as shown in the JDBC example. If you prefer your test to run against a real database, you can use the Auto-configured jOOQ TestsYou can use
Java
Kotlin
JOOQ tests are transactional and roll back at the end of each test by default. If that is not what you want, you can disable transaction management for a test or for the whole test class as shown in the JDBC example. Auto-configured Data MongoDB TestsYou can use
The following class shows the Java
Kotlin
In-memory embedded MongoDB generally works well for tests, since it is fast and does not require any developer installation. If, however, you prefer to run tests against a real MongoDB server, you should exclude the embedded MongoDB auto-configuration, as shown in the following example: Java
Kotlin
Auto-configured Data Neo4j TestsYou can use
The following example shows a typical setup for using Neo4J tests in Spring Boot: Java
Kotlin
By default, Data Neo4j tests are transactional and roll back at the end of each test. See the relevant section in the Spring Framework Reference Documentation for more details. If that is not what you want, you can disable transaction management for a test or for the whole class, as follows: Java
Kotlin
Auto-configured Data Redis TestsYou can use
The following example shows the Java
Kotlin
Auto-configured Data LDAP TestsYou can use
The following example shows the Java
Kotlin
In-memory embedded LDAP generally works well for tests, since it is fast and does not require any developer installation. If, however, you prefer to run tests against a real LDAP server, you should exclude the embedded LDAP auto-configuration, as shown in the following example: Java
Kotlin
Auto-configured REST ClientsYou can use the
The specific beans that you want to test should be specified by using the Java
Kotlin
Auto-configured Spring REST Docs TestsYou can use the
Auto-configured Spring REST Docs Tests With Mock MVC
Java
Kotlin
If you require more control over Spring REST Docs configuration than offered by the attributes of Java
Kotlin
If you want to make use of Spring REST Docs support for a parameterized output directory, you can create a Java
Kotlin
Auto-configured Spring REST Docs Tests With WebTestClient
Java
Kotlin
If you require more control over Spring REST Docs configuration than offered by the attributes of Java
Kotlin
If you want to make use of Spring REST Docs support for a parameterized output directory, you can use a Java
Kotlin
Auto-configured Spring REST Docs Tests With REST Assured
Java
Kotlin
If you require more control over Spring REST Docs configuration than offered by the attributes of Java
Kotlin
Auto-configured Spring Web Services TestsAuto-configured Spring Web Services Client Tests You can use
The following example shows the Java
Kotlin
Auto-configured Spring Web Services Server Tests You
can use
The following example shows the Java
Kotlin
Additional Auto-configuration and SlicingEach slice provides one or more Java
Kotlin
Alternatively, additional auto-configurations can be added for any use of a slice annotation by registering them in a file stored in META-INF/spring/, as shown in the following example:

META-INF/spring/org.springframework.boot.test.autoconfigure.jdbc.JdbcTest.imports
com.example.IntegrationAutoConfiguration

In this example, the
User Configuration and Slicing
It then becomes important not to litter the application’s main class with configuration settings that are specific to a particular area of its functionality. Assume
that you are using Spring Batch and you rely on the auto-configuration for it. You could define your Java
Kotlin
Because this class is the source configuration for the test, any slice test actually tries to start Spring Batch, which is definitely not what you want to do. A recommended approach is to move that area-specific configuration to a separate Java
Kotlin
Test slices exclude Java
Kotlin
The configuration below will, however, cause the custom Java
Kotlin
Another source of confusion is classpath scanning. Assume that, while you structured your code in a sensible way, you need to scan an additional package. Your application may resemble the following code: Java
Kotlin
Doing so effectively overrides the default component scan directive with the side effect of scanning those two packages regardless of the slice that you chose. For instance, a
Using Spock to Test Spring Boot ApplicationsSpock 2.x can be used to test a Spring Boot application. To do so, add a dependency on Spock’s 7.8.4. Test UtilitiesA few test utility classes that are generally useful when testing your application are packaged as part of ConfigDataApplicationContextInitializer
Java
Kotlin
TestPropertyValues
Java
Kotlin
OutputCapture
Java
Kotlin
TestRestTemplate
It is recommended, but not mandatory, to use the Apache HTTP Client (version 4.3.2 or better). If you have that on your classpath, the
Java
Kotlin
Alternatively, if you use the Java
Kotlin
7.9. Creating Your Own Auto-configuration
If you work in a company that develops shared libraries, or if you work on an open-source or commercial library, you might want to develop your own auto-configuration. Auto-configuration classes can be bundled in external jars and still be picked up by Spring Boot. Auto-configuration can be associated to a “starter” that provides the auto-configuration code as well as the typical libraries that you would use with it. We first cover what you need to know to build your own auto-configuration and then we move on to the typical steps required to create a custom starter.

7.9.1. Understanding Auto-configured Beans
Classes that implement auto-configuration are annotated with @AutoConfiguration.

7.9.2. Locating Auto-configuration Candidates
Spring Boot checks for the presence of a META-INF/spring/org.springframework.boot.autoconfigure.AutoConfiguration.imports file within your published jar. The file should list your configuration classes, one per line:

com.mycorp.libx.autoconfigure.LibXAutoConfiguration
com.mycorp.libx.autoconfigure.LibXWebAutoConfiguration
If your configuration needs to be applied in a specific order, you can use the If you want to order certain auto-configurations that should not have any direct knowledge of each other, you can also use As with standard 7.9.3. Condition AnnotationsYou almost always want to include one or more Spring Boot includes a number of
Class Conditions
Class conditions let configuration be included based on the presence or absence of specific classes. This mechanism does not apply the same way to bean methods, where the return type itself is often the target of the condition. To handle this scenario, a separate configuration class can be used to isolate the condition.
Bean Conditions
Bean conditions let a bean be included based on the presence or absence of other specific beans. When placed on a bean method, the target type of the condition defaults to the return type of the method, as shown in the following example:
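A sketch of this pattern, assuming a placeholder `SomeService` type:

```java
import org.springframework.boot.autoconfigure.condition.ConditionalOnMissingBean;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration(proxyBeanMethods = false)
public class MyAutoConfiguration {

    // The condition target defaults to the return type (SomeService),
    // so this bean backs off if a SomeService bean already exists.
    @Bean
    @ConditionalOnMissingBean
    public SomeService someService() {
        return new SomeService();
    }

}
```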
In the preceding example, the service bean is created only if no bean of its type is already contained in the application context.
Property ConditionsThe Resource ConditionsThe
Web Application ConditionsThe The SpEL Expression ConditionsThe
7.9.4. Testing your Auto-configuration
An auto-configuration can be affected by many factors: user configuration (bean definitions and environment customizations), condition evaluation (presence of a particular library), and so on. Concretely, each test should create a well-defined application context that represents a combination of those customizations, and a context runner defined as a field of the test class provides a straightforward way to achieve that.
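A minimal sketch using `ApplicationContextRunner`; `MyServiceAutoConfiguration` and `MyService` are placeholders for the auto-configuration and bean under test:

```java
import org.junit.jupiter.api.Test;
import org.springframework.boot.autoconfigure.AutoConfigurations;
import org.springframework.boot.test.context.runner.ApplicationContextRunner;

import static org.assertj.core.api.Assertions.assertThat;

class MyServiceAutoConfigurationTests {

    // The runner is configured once with the auto-configuration under test
    // and reused by every test method.
    private final ApplicationContextRunner contextRunner = new ApplicationContextRunner()
            .withConfiguration(AutoConfigurations.of(MyServiceAutoConfiguration.class));

    @Test
    void serviceIsCreatedByDefault() {
        this.contextRunner.run((context) -> assertThat(context).hasSingleBean(MyService.class));
    }

}
```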
Each test can use the runner to represent a particular use case. For instance, the sample below invokes a user configuration and checks that the auto-configuration backs off properly.
It is also possible to easily customize the environment used by the runner.
The runner can also be used to display the condition evaluation report, which can be useful when diagnosing why an auto-configuration did or did not apply.
Simulating a Web ContextIf you need to test an auto-configuration that only operates in a servlet or reactive web application context, use the Overriding the ClasspathIt is also possible to test what happens when a particular class and/or package is not present at runtime. Spring Boot ships with a
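A sketch of hiding a class from the classpath with `FilteredClassLoader` to verify that the auto-configuration backs off (`MyService` and the bean name are placeholders):

```java
import org.junit.jupiter.api.Test;
import org.springframework.boot.test.context.FilteredClassLoader;

import static org.assertj.core.api.Assertions.assertThat;

class MyServiceAutoConfigurationClasspathTests {

    @Test
    void serviceIsIgnoredIfLibraryIsNotPresent() {
        // FilteredClassLoader makes MyService appear to be missing at runtime
        this.contextRunner.withClassLoader(new FilteredClassLoader(MyService.class))
                .run((context) -> assertThat(context).doesNotHaveBean("myService"));
    }

}
```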
7.9.5. Creating Your Own StarterA typical Spring Boot starter contains code to auto-configure and customize the infrastructure of a given technology, let’s call that "acme". To make it easily extensible, a number of configuration keys in a dedicated namespace can be exposed to the environment. Finally, a single "starter" dependency is provided to help users get started as easily as possible. Concretely, a custom starter can contain the following:
This separation into two modules is in no way necessary. If "acme" has several flavors, options, or optional features, then it is better to separate the auto-configuration, as you can clearly express the fact that some features are optional. Besides, you have the ability to craft a starter that provides an opinion about those optional dependencies.
At the same time, others can rely only on the If the auto-configuration is relatively straightforward and does not have optional feature, merging the two modules in the starter is definitely an option. NamingYou should make sure to provide a proper namespace for your starter. Do not start your module names with As a rule of thumb, you should name a combined module after the starter. For example,
assume that you are creating a starter for "acme" and that you name the auto-configure module Configuration keysIf your
starter provides configuration keys, use a unique namespace for them. In particular, do not include your keys in the namespaces that Spring Boot uses. Make sure that configuration keys are documented by adding field javadoc for each property, as shown in the following example:
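A sketch of documented configuration properties; the `acme` namespace and the fields are placeholders:

```java
import java.time.Duration;

import org.springframework.boot.context.properties.ConfigurationProperties;

@ConfigurationProperties("acme")
public class AcmeProperties {

    /**
     * Whether to check the location of acme resources.
     */
    private boolean checkLocation = true;

    /**
     * Timeout for establishing a connection to the acme server.
     */
    private Duration loginTimeout = Duration.ofSeconds(3);

    // getters and setters omitted for brevity

}
```

The field javadoc is what the annotation processor turns into the description shown by IDE assistance.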
Here are some rules we follow internally to make sure descriptions are consistent:
Make sure to trigger meta-data generation so that IDE assistance is available for your keys as well. You may want to review the generated metadata ( The “autoconfigure” ModuleThe
Spring Boot uses an annotation processor to collect the conditions on auto-configurations in a metadata file ( When building with Maven, it is recommended to add the following dependency in a module that contains auto-configurations:
If you have defined auto-configurations directly in your application,
make sure to configure the
With Gradle, the dependency should be declared in the
Starter ModuleThe starter is really an empty jar. Its only purpose is to provide the necessary dependencies to work with the library. You can think of it as an opinionated view of what is required to get started. Do not make assumptions about the project in which your starter is added. If the library you are auto-configuring typically requires other starters, mention them as well. Providing a proper set of default dependencies may be hard if the number of optional dependencies is high, as you should avoid including dependencies that are unnecessary for a typical usage of the library. In other words, you should not include optional dependencies.
7.10. Kotlin SupportKotlin is a statically-typed language targeting the JVM (and other platforms) which allows writing concise and elegant code while providing interoperability with existing libraries written in Java. Spring Boot provides Kotlin support by leveraging the support in other Spring projects such as Spring Framework, Spring Data, and Reactor. See the Spring Framework Kotlin support documentation for more information. The
easiest way to start with Spring Boot and Kotlin is to follow this comprehensive tutorial. You can create new Kotlin projects by using start.spring.io. Feel free to join the #spring channel of Kotlin Slack or ask a question with the 7.10.1. RequirementsSpring Boot requires at least Kotlin 1.3.x and manages a suitable Kotlin version through dependency management. To use Kotlin, Jackson’s Kotlin module is required for serializing / deserializing JSON data in Kotlin. It is automatically registered when found on the classpath. A warning message is logged if Jackson and Kotlin are present but the Jackson Kotlin module is not.
7.10.2. Null-safetyOne of Kotlin’s key features is null-safety. It deals with Although Java does not allow one to express null-safety in its type system, Spring Framework, Spring Data, and Reactor now provide null-safety of their API through tooling-friendly annotations. By default, types from Java APIs used in Kotlin are recognized as platform types for which null-checks are relaxed. Kotlin’s support for JSR 305 annotations combined with nullability annotations provide null-safety for the related Spring API in Kotlin. The JSR 305 checks can be
configured by adding the
7.10.3. Kotlin APIrunApplicationSpring Boot provides an idiomatic way to run an application with
This is a drop-in replacement for
ExtensionsKotlin extensions provide the ability to extend existing classes with additional functionality. The Spring Boot Kotlin API makes use of these extensions to add new Kotlin specific conveniences to existing APIs.
7.10.4. Dependency managementIn order to avoid mixing different versions of Kotlin dependencies on the classpath, Spring Boot imports the Kotlin BOM. With Maven, the Kotlin version can be customized by setting the Spring Boot also manages the version of Coroutines dependencies by importing the Kotlin Coroutines BOM. The version can be customized by setting the
7.10.5. @ConfigurationProperties
7.10.6. TestingWhile it is possible to use JUnit 4 to test Kotlin code, JUnit 5 is provided by default and is recommended. JUnit 5 enables a test class to be instantiated once and reused for all of the class’s tests. This makes it possible to use To mock Kotlin classes, MockK is recommended. If you need the 7.10.7. ResourcesFurther reading
Examples
7.11. What to Read Next
If you are comfortable with Spring Boot's core features, you can continue on and read about production-ready features.
8. Web
Spring Boot is well suited for web application development. You can create a self-contained HTTP server by using embedded Tomcat, Jetty, Undertow, or Netty. Most web applications use the dedicated web starter to get up and running quickly. If you have not yet developed a Spring Boot web application, you can follow the "Hello World!" example in the Getting started section.
8.1. Servlet Web Applications
If you want to build servlet-based web applications, you can take advantage of Spring Boot's auto-configuration for Spring MVC or Jersey.
8.1.1. The “Spring Web MVC Framework”
The Spring Web MVC framework (often referred to as “Spring MVC”) is a rich “model view controller” web framework. Spring MVC lets you create special controller beans to handle incoming HTTP requests. The following code shows a typical controller:
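A sketch of an annotated REST controller; `User` and `UserRepository` are placeholder types:

```java
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequestMapping("/users")
public class MyRestController {

    private final UserRepository userRepository;

    public MyRestController(UserRepository userRepository) {
        this.userRepository = userRepository;
    }

    // Maps GET /users/{userId} and serializes the result to the response body
    @GetMapping("/{userId}")
    public User getUser(@PathVariable Long userId) {
        return this.userRepository.findById(userId).orElseThrow();
    }

}
```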
“WebMvc.fn”, the functional variant, separates the routing configuration from the actual handling of the requests, as shown in the following example:
Spring MVC is part of the core Spring Framework, and detailed information is available in the reference documentation. There are also several guides that cover Spring MVC available at spring.io/guides.
Spring MVC Auto-configurationSpring Boot provides auto-configuration for Spring MVC that works well with most applications. The auto-configuration adds the following features on top of Spring’s defaults:
If you want to keep those Spring Boot MVC customizations and make more MVC customizations (interceptors, formatters, view controllers, and other features), you can add your own If you want to provide custom instances of If you want to take complete control of Spring MVC, you can add your own
HttpMessageConverters
Spring MVC uses the HttpMessageConverter interface to convert HTTP requests and responses. Sensible defaults are included out of the box. If you need to add or customize converters, you can use Spring Boot's HttpMessageConverters class, as shown in the following example:
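A sketch of contributing additional converters; `AdditionalHttpMessageConverter` and `AnotherHttpMessageConverter` are placeholders for your own converter implementations:

```java
import org.springframework.boot.autoconfigure.http.HttpMessageConverters;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.http.converter.HttpMessageConverter;

@Configuration(proxyBeanMethods = false)
public class MyHttpMessageConvertersConfiguration {

    // Converters in this bean are added to the auto-configured list
    @Bean
    public HttpMessageConverters customConverters() {
        HttpMessageConverter<?> additional = new AdditionalHttpMessageConverter();
        HttpMessageConverter<?> another = new AnotherHttpMessageConverter();
        return new HttpMessageConverters(additional, another);
    }

}
```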
Any MessageCodesResolverSpring MVC has a strategy for generating error codes for rendering error messages from binding errors: Static ContentBy default, Spring Boot serves static content from a directory called In a stand-alone web application, the default servlet from the container is not enabled. It can be enabled using the The default servlet acts as a fallback, serving content from the root of the By default, resources are mapped on Properties
You can also customize the static resource locations by using the In addition to the “standard” static resource locations mentioned earlier, a special case is made for Webjars content. Any resources with a path in
Spring Boot also supports the advanced resource handling features provided by Spring MVC, allowing use cases such as cache-busting static resources or using version agnostic URLs for Webjars. To use version agnostic URLs for Webjars, add the
To use cache busting, the following configuration configures a cache busting solution for all static resources, effectively adding a content hash to resource URLs:
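Assuming the standard Spring Boot 2.x resource-chain keys are meant here, the configuration would look like this:

```properties
# Enable the content-hash ("cache busting") version strategy for all resources
spring.web.resources.chain.strategy.content.enabled=true
spring.web.resources.chain.strategy.content.paths=/**
```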
When loading resources dynamically with, for example, a JavaScript module loader, renaming files is not an option. That is why other strategies are also supported and can be combined. A "fixed" strategy adds a static version string in the URL without changing the file name, as shown in the following example:
With this configuration, JavaScript modules located under Welcome PageSpring Boot supports both static and templated welcome pages. It first looks for an Custom FaviconAs with other static resources, Spring Boot checks for a Path Matching and Content NegotiationSpring MVC can map incoming HTTP requests to handlers by looking at the request path and matching it to the mappings defined in your application (for example, Spring Boot chooses to
disable suffix pattern matching by default, which means that requests like There are other ways to deal with HTTP clients that do not consistently send proper "Accept" request headers. Instead of using suffix matching, we can use a query parameter to ensure that requests like Properties
Or if you prefer to use a different parameter name:
Most standard media types are supported out-of-the-box, but you can also define new ones:
Suffix pattern matching is deprecated and will be removed in a future release. If you understand the caveats and would still like your application to use suffix pattern matching, the following configuration is required:
Alternatively, rather than open all suffix patterns, it is more secure to only support registered suffix patterns:
As of Spring Framework 5.3, Spring MVC supports several implementation strategies for matching request paths to controller handlers. It was previously only supporting a single strategy; the strategy to use can now be chosen with a configuration property:
For more details on why you should consider this new implementation, see the dedicated blog post.
ConfigurableWebBindingInitializerSpring MVC uses a Template EnginesAs well as REST web services, you can also use Spring MVC to serve dynamic HTML content. Spring MVC supports a variety of templating technologies, including Thymeleaf, FreeMarker, and JSPs. Also, many other templating engines include their own Spring MVC integrations. Spring Boot includes auto-configuration support for the following templating engines:
When you use one of these templating engines with the default configuration, your templates are picked up automatically from
Error HandlingBy default, Spring Boot provides an There are a number of To replace the default behavior completely, you can implement
You can also define a controller advice class to customize the JSON document to return for a particular controller and/or exception type, as shown in the following example:
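A sketch of such a controller advice; `SomeController`, `MyException`, and `CustomErrorType` are placeholder types:

```java
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.ControllerAdvice;
import org.springframework.web.bind.annotation.ExceptionHandler;
import org.springframework.web.context.request.WebRequest;
import org.springframework.web.servlet.mvc.method.annotation.ResponseEntityExceptionHandler;

// Applies only to controllers in the same package as SomeController
@ControllerAdvice(basePackageClasses = SomeController.class)
public class MyControllerAdvice extends ResponseEntityExceptionHandler {

    @ExceptionHandler(MyException.class)
    public ResponseEntity<?> handleMyException(MyException ex, WebRequest request) {
        HttpStatus status = HttpStatus.INTERNAL_SERVER_ERROR;
        // Return a custom JSON body instead of the default error attributes
        return new ResponseEntity<>(new CustomErrorType(status.value(), ex.getMessage()), status);
    }

}
```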
In the preceding example, a customized error body is returned for the matching exception type. In some cases, errors handled at the controller level are not recorded by the metrics infrastructure. Applications can ensure that such exceptions are recorded with the request metrics by setting the handled exception as a request attribute:
Custom Error Pages
If you want to display a custom HTML error page for a given status code, you can add a file to an error directory under your static resources or templates. Error pages can either be static HTML or built by using templates; the name of the file should be the exact status code or a series mask. For example, to map 404 to a static HTML file, your directory structure would be as follows:

src/
 +- main/
     +- java/
     |   + <source code>
     +- resources/
         +- public/
             +- error/
             |   +- 404.html
             +- <other public assets>

To map all 5xx errors by using a FreeMarker template, your directory structure would be as follows:

src/
 +- main/
     +- java/
     |   + <source code>
     +- resources/
         +- templates/
             +- error/
             |   +- 5xx.ftlh
             +- <other templates>

For more complex mappings, you can also add beans that implement the error view resolver contract, as shown in the following example:
Mapping Error Pages Outside of Spring MVC
For applications that do not use Spring MVC, you can register error pages directly through an error page registrar. This abstraction works directly with the underlying embedded servlet container and works even if you do not have a Spring MVC dispatcher servlet.
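A sketch using Spring Boot's `ErrorPageRegistrar` callback; the `/400` path is illustrative:

```java
import org.springframework.boot.web.server.ErrorPage;
import org.springframework.boot.web.server.ErrorPageRegistrar;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.http.HttpStatus;

@Configuration(proxyBeanMethods = false)
public class MyErrorPagesConfiguration {

    // Registers a custom error page for HTTP 400 responses with the
    // embedded servlet container directly, bypassing Spring MVC.
    @Bean
    public ErrorPageRegistrar errorPageRegistrar() {
        return (registry) -> registry.addErrorPages(new ErrorPage(HttpStatus.BAD_REQUEST, "/400"));
    }

}
```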
Note that the default Error Handling in a WAR Deployment When deployed to a servlet container, Spring Boot uses its error page filter to forward a request with an error status to the appropriate error page. This is necessary as the servlet specification does not provide an API for registering error pages. Depending on the container that you are deploying your war file to and the technologies that your application uses, some additional configuration may be required. The error page filter can only forward the request to the correct error page if the response has not already
been committed. By default, WebSphere Application Server 8.0 and later commits the response upon successful completion of a servlet’s service method. You should disable this behavior by setting If you are using Spring Security and want to access the principal in an error page, you must configure Spring Security’s filter to be invoked on error dispatches. To do so, set the CORS SupportCross-origin resource sharing (CORS) is a W3C specification implemented by most browsers that lets you specify in a flexible way what kind of cross-domain requests are authorized, instead of using some less secure and less powerful approaches such as IFRAME or JSONP. As of version 4.2, Spring MVC supports CORS. Using controller method CORS configuration with
8.1.2. JAX-RS and JerseyIf you prefer the JAX-RS programming model for REST endpoints, you can use one of the available implementations instead of Spring MVC. Jersey
and Apache CXF work quite well out of the box. CXF requires you to register its servlet or filter as a bean in your application context. To get started with Jersey, include the Jersey starter and register your endpoints in a resource configuration bean, as shown in the following example:
For more advanced customizations, you can also register an arbitrary number of beans that implement Jersey's customizer callback. All the registered endpoints should be annotated components with HTTP resource annotations, as shown in the following example:
By default, Jersey is set up as a servlet in your application.
8.1.3. Embedded Servlet Container Support
For servlet applications, Spring Boot includes support for embedded Tomcat, Jetty,
and Undertow servers. Most developers use the appropriate “Starter” to obtain a fully configured instance. By default, the embedded server listens for HTTP requests on port Servlets, Filters, and ListenersWhen using an embedded servlet container, you can register servlets, filters, and all the listeners (such as Registering Servlets, Filters, and Listeners as Spring Beans Any By default, if the context contains only a single Servlet, it is mapped to If
convention-based mapping is not flexible enough, you can use dedicated registration beans for complete control. It is usually safe to leave filter beans unordered. If a specific order is required, you should annotate the filter bean with its required order or make it implement Spring's ordering contract.
Servlet Context InitializationEmbedded servlet containers do not directly execute the servlet 3.0+ If you need to perform servlet context initialization in a Spring Boot application, you should register a bean that implements the Scanning for Servlets, Filters, and listeners When using an embedded container, automatic registration of classes annotated with
The ServletWebServerApplicationContextUnder the hood, Spring Boot uses a different type of
In an embedded container setup, the
Customizing Embedded Servlet ContainersCommon servlet container settings can be configured by using Spring Common server settings include:
Spring Boot tries as much as possible to expose common settings, but this is not always possible. For those cases, dedicated namespaces offer server-specific
customizations for settings that cannot be expressed in a server-neutral way.
SameSite Cookies
The SameSite cookie attribute can be used by web browsers to control whether and how cookies are submitted in cross-site requests. If you want your session cookie to use a SameSite value of Lax, for example, you can add the corresponding setting to your application configuration:
If you want to change the SameSite value of cookies that Spring Boot does not manage directly, you can declare a supplier bean instead. There are a number of convenience factory and filter methods that you can use to quickly match specific cookies. For example, adding the following bean will automatically apply a SameSite value of Lax to all cookies with a name that matches a given pattern:
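A sketch using Spring Boot's `CookieSameSiteSupplier`; the `myapp.*` name pattern is illustrative:

```java
import org.springframework.boot.web.servlet.server.CookieSameSiteSupplier;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration(proxyBeanMethods = false)
public class MySameSiteConfiguration {

    // Applies SameSite=Lax to every cookie whose name matches the regex
    @Bean
    public CookieSameSiteSupplier applicationCookieSameSiteSupplier() {
        return CookieSameSiteSupplier.ofLax().whenHasNameMatching("myapp.*");
    }

}
```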
Programmatic Customization
If you need to programmatically configure your embedded servlet container, you can register a Spring bean that implements the web server factory customizer interface, as shown in the following example:
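A sketch of such a customizer using `WebServerFactoryCustomizer`; port 9000 is an arbitrary example value:

```java
import org.springframework.boot.web.server.WebServerFactoryCustomizer;
import org.springframework.boot.web.servlet.server.ConfigurableServletWebServerFactory;
import org.springframework.stereotype.Component;

@Component
public class MyWebServerFactoryCustomizer
        implements WebServerFactoryCustomizer<ConfigurableServletWebServerFactory> {

    // Called by Spring Boot before the web server is created
    @Override
    public void customize(ConfigurableServletWebServerFactory server) {
        server.setPort(9000);
    }

}
```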
Customizing ConfigurableServletWebServerFactory Directly For more advanced use cases that require you to extend from Setters are provided for many configuration options. Several protected method “hooks” are also provided should you need to do something more exotic. See the source code documentation for details.
JSP LimitationsWhen running a Spring Boot application that uses an embedded servlet container (and is packaged as an executable archive), there are some limitations in the JSP support.
8.2. Reactive Web ApplicationsSpring Boot simplifies development of reactive web applications by providing auto-configuration for Spring Webflux. 8.2.1. The “Spring WebFlux Framework”Spring WebFlux is the new reactive web framework introduced in Spring Framework 5.0. Unlike Spring MVC, it does not require the servlet API, is fully asynchronous and non-blocking, and implements the Reactive Streams specification through the Reactor project. Spring WebFlux comes in two flavors: functional and annotation-based. The annotation-based one is quite close to the Spring MVC model, as shown in the following example: Java
“WebFlux.fn”, the functional variant, separates the routing configuration from the actual handling of the requests, as shown in the following example:
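A sketch of a functional WebFlux route; the path and handler body are illustrative:

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.reactive.function.server.RouterFunction;
import org.springframework.web.reactive.function.server.ServerResponse;

import static org.springframework.web.reactive.function.server.RequestPredicates.GET;
import static org.springframework.web.reactive.function.server.RouterFunctions.route;

@Configuration(proxyBeanMethods = false)
public class MyRoutingConfiguration {

    // Routing is declared separately from request handling
    @Bean
    public RouterFunction<ServerResponse> helloRouter() {
        return route(GET("/hello"), (request) -> ServerResponse.ok().bodyValue("Hello WebFlux"));
    }

}
```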
WebFlux is part of the Spring Framework and detailed information is available in its reference documentation.
To get started, add the
Spring WebFlux Auto-configurationSpring Boot provides auto-configuration for Spring WebFlux that works well with most applications. The auto-configuration adds the following features on top of Spring’s defaults:
If you want to keep Spring Boot
WebFlux features and you want to add additional WebFlux configuration, you can add your own If you want to take complete control of Spring WebFlux, you can add your own HTTP Codecs with HttpMessageReaders and HttpMessageWritersSpring WebFlux uses the Spring Boot provides dedicated configuration properties for codecs, If you need to add or customize codecs, you can create a custom Java
Static ContentBy default, Spring Boot serves static content from a directory called By default, resources are mapped on Properties
You can also customize the static resource locations by using In addition to the “standard” static resource locations listed earlier, a special case is made for Webjars content. Any resources with a path in
Welcome PageSpring Boot supports both static and templated welcome pages. It first looks for an Template EnginesAs well as REST web services, you can also use Spring WebFlux to serve dynamic HTML content. Spring WebFlux supports a variety of templating technologies, including Thymeleaf, FreeMarker, and Mustache. Spring Boot includes auto-configuration support for the following templating engines:
When you use one of these templating engines with the default configuration, your templates are picked up automatically from Error HandlingSpring Boot provides a The first step to customizing this feature often involves using the existing mechanism but replacing or augmenting the error contents. For
that, you can add a bean of type To change the error handling behavior, you can implement Java
For a more complete picture, you can also subclass In some cases, errors handled at the controller or handler function level are not recorded by the metrics infrastructure. Applications can ensure that such exceptions are recorded with the request metrics by setting the handled exception as a request attribute: Java
Custom Error Pages If you want to display a custom HTML error page for a given status code, you can add a file to an For example, to map
To map all
Web FiltersSpring WebFlux provides a Where the order of the filters is important they can implement
8.2.2. Embedded Reactive Server Support
Spring Boot includes support for the following embedded reactive web servers: Reactor Netty, Tomcat, Jetty, and Undertow. Most developers use the appropriate “Starter” to obtain a fully configured instance. By default, the embedded server listens for HTTP requests on port 8080.
8.2.3. Reactive Server Resources Configuration
When auto-configuring a Reactor Netty or Jetty server, Spring Boot will create specific beans that will provide HTTP resources to the server instance. By default, those resources will also be shared with the Reactor Netty and Jetty clients for optimal performance, provided that the same technology is used for server and client and that the client instance is built using the auto-configured builder.
Developers can override the resource configuration for Jetty and Reactor Netty by providing a custom You can learn more about the resource configuration on the client side in the WebClient Runtime section. 8.3. Graceful ShutdownGraceful shutdown is supported with all four embedded web
servers (Jetty, Reactor Netty, Tomcat, and Undertow) and with both reactive and servlet-based web applications. It occurs as part of closing the application context and is performed in the earliest phase of stopping
To enable graceful shutdown, configure the server's shutdown mode property. To configure how long the application waits for active requests to complete, configure the shutdown phase timeout property, as shown in the following example:
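Assuming the standard Spring Boot 2.x property keys are meant here, the configuration looks like this:

```properties
# Switch the embedded server to graceful shutdown
server.shutdown=graceful
# Maximum time to wait for active requests during shutdown
spring.lifecycle.timeout-per-shutdown-phase=20s
```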
8.4. Spring SecurityIf Spring Security is on the classpath, then web applications are secured by default. Spring Boot relies on Spring Security’s content-negotiation strategy to determine whether to use The default Using generated security password: 78fa095d-3f4c-48b1-ad50-e24c31d5cf35 This generated password is for development use only. Your security configuration must be updated before running your application in production.
You can change the username and password by providing a The basic features you get by default in a web application are:
You can provide a different 8.4.1. MVC SecurityThe default security configuration is implemented in To also switch off the Access rules can be overridden by adding a custom 8.4.2. WebFlux SecuritySimilar to Spring MVC applications, you can secure your WebFlux applications by adding the To also switch off the Access rules and the use of multiple Spring
Security components such as OAuth 2 Client and Resource Server can be configured by adding a custom
For example, you can customize your security configuration by adding something like the following:
8.4.3. OAuth2OAuth2 is a widely used authorization framework that is supported by Spring. ClientIf you have You can register multiple OAuth2 clients and providers under the Properties
For OpenID Connect providers that support OpenID Connect discovery, the configuration can be further simplified. The provider needs to be configured with an Properties
By default, Spring Security’s Java
OAuth2 Client Registration for Common Providers For common OAuth2 and OpenID providers, including Google, Github, Facebook, and Okta, we provide a set of provider defaults ( If you do not need to
customize these providers, you can set the In other words, the two configurations in the following example use the Google provider: Properties
Resource ServerIf you have Properties
The same properties are applicable for both servlet and reactive applications. Alternatively, you can define your own In cases where opaque tokens are used instead of JWTs, you can configure the following properties to validate tokens through introspection: Properties
Again, the same properties are applicable for both servlet and reactive applications. Alternatively, you can define your own Currently, Spring Security does not provide support for implementing an OAuth 2.0 Authorization Server. However, this functionality is available from the Spring Security OAuth project, which will eventually be superseded by
Spring Security completely. Until then, you can use the 8.4.4. SAML 2.0Relying PartyIf you have A relying party registration represents a paired configuration between an Identity Provider, IDP, and a Service Provider, SP. You can register multiple relying parties
under the Properties
For SAML2 logout, by default, Spring Security’s
8.5. Spring SessionSpring Boot provides Spring Session auto-configuration for a wide range of data stores. When building a servlet web application, the following stores can be auto-configured:
The servlet auto-configuration replaces the need to use When building a reactive web application, the following stores can be auto-configured:
The reactive auto-configuration replaces the need to use If a single Spring
Session module is present on the classpath, Spring Boot uses that store implementation automatically. If you have more than one implementation, you must choose the Properties
Each store has specific additional settings. For instance, it is possible to customize the name of the table for the JDBC store, as shown in the following example: Properties
For setting the timeout of the session you can use the You can take
control over Spring Session’s configuration using 8.6. Spring for GraphQLIf you want to build GraphQL applications,
you can take advantage of Spring Boot’s auto-configuration for Spring for GraphQL. The Spring for GraphQL project is based on GraphQL Java. You’ll need the
8.6.1. GraphQL SchemaA Spring GraphQL application requires a defined schema at startup. By default, you can write ".graphqls" or ".gqls" schema files under
In the following sections, we’ll consider this sample GraphQL schema, defining two types and two queries:
8.6.2. GraphQL RuntimeWiringThe GraphQL Java Typically, however, applications will not implement Java
8.6.3. Querydsl and QueryByExample Repositories SupportSpring Data repositories annotated with
are detected by Spring Boot and considered as candidates for 8.6.4. TransportsHTTP and WebSocketThe GraphQL HTTP endpoint is at HTTP POST "/graphql" by default. The path can be customized with
The GraphQL WebSocket endpoint is off by default. To enable it:
Spring GraphQL provides a Web Interception
model. This is quite useful for retrieving information from an HTTP request header and set it in the GraphQL context or fetching information from the same context and writing it to a response header. With Spring Boot, you can declare a Spring MVC and Spring WebFlux support CORS (Cross-Origin Resource Sharing) requests. CORS is a critical part of the web config for GraphQL applications that are accessed from browsers using different domains. Spring Boot supports many configuration properties under the Properties
RSocketRSocket is also supported as a transport, on top of WebSocket or TCP. Once the RSocket server is configured, we can
configure our GraphQL handler on a particular route using Spring Boot auto-configures a Java
And then send a request: Java
8.6.5. Exceptions HandlingSpring GraphQL enables applications to register one or more Spring 8.6.6. GraphiQL and Schema printerSpring GraphQL offers infrastructure for helping developers when consuming or developing a GraphQL API. Spring GraphQL ships with a default GraphiQL page that is exposed at You can also choose to expose the GraphQL schema in text format at 8.7. Spring HATEOASIf you develop a RESTful API that makes
use of hypermedia, Spring Boot provides auto-configuration for Spring HATEOAS that works well with most applications. The auto-configuration replaces the need to use You can take control of Spring HATEOAS’s configuration by using
8.8. What to Read NextYou should now have a good understanding of how to develop web applications with Spring Boot. The next few sections describe how Spring Boot integrates with various data technologies, messaging systems, and other IO capabilities. You can pick any of these based on your application’s needs. 9. DataSpring Boot integrates with a number of data technologies, both SQL and NoSQL. 9.1. SQL DatabasesThe Spring Framework provides extensive support for working with SQL databases, from direct JDBC access using 9.1.1. Configure a DataSourceJava’s
Embedded Database SupportIt is often convenient to develop applications by using an in-memory embedded database. Obviously, in-memory databases do not provide persistent storage. You need to populate your database when your application starts and be prepared to throw away data when your application ends. Spring Boot can auto-configure embedded H2, HSQL, and Derby databases. You need not provide any connection URLs. You need only include a build dependency to the embedded database that you want to use. If there are multiple embedded databases on the classpath, set the
For example, the typical POM dependencies would be as follows:
Connection to a Production Database
Production database connections can also be auto-configured by using a pooling DataSource.
DataSource Configuration
DataSource configuration is controlled by external configuration properties in a dedicated namespace. For example, you might declare the following section in your application configuration:
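Assuming the standard Spring Boot datasource keys, a typical declaration looks like this (the URL and credentials are placeholders):

```properties
spring.datasource.url=jdbc:mysql://localhost/test
spring.datasource.username=dbuser
spring.datasource.password=dbpass
```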
See For instance, if you use the Tomcat connection pool, you could customize many additional settings, as shown in the following example: Properties
This will set the pool to wait 10000ms before throwing an exception if no connection is available, limit the maximum number of connections to 50, and validate the connection before borrowing it from the pool.
You can bypass that algorithm completely and specify the connection pool to use by setting the Additional connection pools can always be configured manually, using
Connection to a JNDI DataSource
If you deploy your Spring Boot application to an Application Server, you might want to configure and manage your DataSource by using your Application Server's built-in features and access it by using JNDI. A dedicated property can be used to specify the JNDI location of the DataSource, as shown in the following example:
9.1.2. Using JdbcTemplate
Spring's JdbcTemplate and NamedParameterJdbcTemplate classes are auto-configured, and you can autowire them directly into your own beans, as shown in the following example:
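A sketch of injecting the auto-configured `JdbcTemplate`; the bean and table name are placeholders:

```java
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.stereotype.Component;

@Component
public class MyBean {

    private final JdbcTemplate jdbcTemplate;

    // The auto-configured JdbcTemplate is injected by constructor
    public MyBean(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    public void doSomething() {
        this.jdbcTemplate.execute("delete from customer");
    }

}
```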
You can customize some properties of the template by using dedicated configuration properties, as shown in the following example:
9.1.3. JPA and Spring Data JPAThe Java Persistence API is a standard technology that lets you “map” objects to relational databases. The
Entity ClassesTraditionally, JPA “Entity” classes are specified in a Any classes annotated with Java
Spring Data JPA Repositories
Spring Data JPA repositories are interfaces that you can define to access data. JPA queries are created automatically from your method names. For more complex queries, you can annotate your method with Spring Data's query annotation. Spring Data repositories usually extend from one of the base repository interfaces. The following example shows a typical Spring Data repository interface definition:
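A sketch of a derived-query repository; `City` is a placeholder JPA entity:

```java
import org.springframework.data.domain.Page;
import org.springframework.data.domain.Pageable;
import org.springframework.data.repository.Repository;

public interface CityRepository extends Repository<City, Long> {

    Page<City> findAll(Pageable pageable);

    // Query derived from the method name: matches name and state, ignoring case
    City findByNameAndStateAllIgnoringCase(String name, String state);

}
```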
Spring Data JPA repositories support three different modes of bootstrapping: default, deferred, and lazy. To enable deferred or lazy bootstrapping, set the
Spring Data Envers RepositoriesIf Spring Data Envers is available, JPA repositories are auto-configured to support typical Envers queries. To use Spring Data Envers,
make sure your repository extends the Envers revision repository interface, as shown in the following example:
Creating and Dropping JPA DatabasesBy default, JPA databases are automatically created
only if you use an embedded database (H2, HSQL, or Derby). You can explicitly configure JPA settings by using Properties
Yaml
Properties
Yaml
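A minimal sketch of explicit JPA settings in application.properties (create-drop recreates the schema on startup and drops it on shutdown; arbitrary native provider properties can be passed through spring.jpa.properties.*):

```properties
# Create the schema on startup and drop it on shutdown
spring.jpa.hibernate.ddl-auto=create-drop
# Pass a native Hibernate property through to the JPA provider
spring.jpa.properties.hibernate.globally_quoted_identifiers=true
```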
The line in the preceding example passes a value of By default, the DDL execution (or validation) is deferred until the Open EntityManager in ViewIf you are running a web application, Spring Boot by default registers
9.1.4. Spring Data JDBCSpring Data includes repository support for JDBC and will automatically generate SQL for the methods on Spring Boot will auto-configure Spring Data’s JDBC repositories when the necessary dependencies are on the classpath. They can be added to your project with a single dependency
on 9.1.5. Using H2’s Web ConsoleThe H2 database provides a browser-based console that Spring Boot can auto-configure for you. The console is auto-configured when the following conditions are met:
Changing the H2 Console’s PathBy default, the console is available at Accessing the H2 Console in a Secured ApplicationH2 Console uses frames and, as it is intended for development only, does not implement CSRF protection measures. If your application uses Spring Security, you need to configure it to
More information on CSRF and the header X-Frame-Options can be found in the Spring Security Reference Guide. In simple setups, a Java
Kotlin
9.1.6. Using jOOQjOOQ Object Oriented Querying (jOOQ) is a popular product from Data Geekery which generates Java code from your database and lets you build type-safe SQL queries through its fluent API. Both the commercial and open source editions can be used with Spring Boot. Code GenerationIn order to use jOOQ type-safe queries, you need to generate Java classes from your database schema. You can follow the instructions in the
jOOQ user manual. If you use the
Using DSLContextThe fluent API offered by jOOQ is initiated through the Java
Kotlin
You can then use the Java
Kotlin
jOOQ SQL DialectUnless the
Customizing jOOQMore advanced customizations can be achieved by defining your own You can also create your own 9.1.7. Using R2DBCThe Reactive Relational Database Connectivity (R2DBC) project brings reactive programming APIs to relational databases. R2DBC’s
Properties
Yaml
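A minimal R2DBC connection configuration looks like the following (URL and credentials are placeholders):

```properties
spring.r2dbc.url=r2dbc:postgresql://localhost/test
spring.r2dbc.username=dbuser
spring.r2dbc.password=dbpass
```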
To customize the connections created by a Java
Kotlin
The following examples show how to set some PostgreSQL connection options: Java
Kotlin
When a Embedded Database SupportSimilarly to the JDBC support, Spring Boot can automatically configure an embedded database for reactive usage. You need not provide any connection URLs. You need only include a build dependency to the embedded database that you want to use, as shown in the following example:
Using DatabaseClientA Java
Kotlin
Spring Data R2DBC RepositoriesSpring Data R2DBC repositories are interfaces that you can define to access data. Queries are created automatically from your method names. For example, a For more complex queries, you can annotate your method with Spring Data’s Spring Data repositories usually extend from the The following example shows a typical Spring Data repository interface definition: Java
Kotlin
9.2. Working with NoSQL TechnologiesSpring Data provides additional projects that help you access a variety of NoSQL technologies, including:
Spring Boot provides auto-configuration for Redis, MongoDB, Neo4j, Solr, Elasticsearch, Cassandra, Couchbase, LDAP and InfluxDB. Additionally, Spring Boot for Apache Geode provides auto-configuration for Apache Geode. You can make use of the other projects, but you must configure them yourself. See the appropriate reference documentation at spring.io/projects/spring-data. 9.2.1. RedisRedis is a cache, message broker, and richly-featured key-value store. Spring Boot offers basic auto-configuration for the Lettuce and Jedis client libraries and the abstractions on top of them provided by Spring Data Redis. There is a
Connecting to RedisYou can inject an auto-configured Java
Kotlin
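Connection details for Redis can be set through the spring.redis.* properties; the host, port, and password below are placeholders:

```properties
spring.redis.host=localhost
spring.redis.port=6379
spring.redis.password=secret
```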
If you add your own By default, a pooled connection factory is auto-configured if 9.2.2. MongoDBMongoDB is an open-source NoSQL document database that uses a JSON-like schema instead of traditional table-based relational data. Spring Boot offers several conveniences for working with MongoDB, including the Connecting to a MongoDB DatabaseTo access MongoDB databases, you can inject an auto-configured Java
Kotlin
If you have defined your own
The auto-configured You can set the Properties
Yaml
Alternatively, you can specify connection details using discrete properties. For example, you might declare the following settings in your Properties
Yaml
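A sketch of the discrete-property form for MongoDB (host, credentials, and database name are placeholders):

```properties
spring.data.mongodb.host=mongoserver1.example.com
spring.data.mongodb.port=27017
spring.data.mongodb.username=user
spring.data.mongodb.password=secret
spring.data.mongodb.database=test
```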
MongoTemplateSpring Data MongoDB provides a Java
Kotlin
Spring Data MongoDB RepositoriesSpring Data includes repository support for MongoDB. As with the JPA repositories discussed earlier, the basic principle is that queries are constructed automatically, based on method names. In fact, both Spring Data JPA and Spring Data MongoDB share the same common infrastructure. You could take the JPA example from earlier and, assuming that Java
Kotlin
Embedded MongoSpring Boot offers auto-configuration for Embedded Mongo. To use it in your Spring Boot application, add a dependency on
The port that Mongo listens on can be configured by setting the
If you have SLF4J on the classpath, the output produced by Mongo is automatically routed to a logger named You can declare your own 9.2.3. Neo4jNeo4j is an open-source NoSQL graph database that uses a rich data model of nodes connected by first class relationships, which is better suited for connected big data than traditional RDBMS approaches. Spring Boot offers several conveniences for working with Neo4j, including the Connecting to a Neo4j DatabaseTo access a Neo4j server, you can inject an auto-configured Java
Kotlin
You can configure various aspects of the driver using Properties
Yaml
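For example, the driver's URI and authentication can be configured through the spring.neo4j.* properties (server name and credentials are placeholders):

```properties
spring.neo4j.uri=bolt://my-server:7687
spring.neo4j.authentication.username=neo4j
spring.neo4j.authentication.password=secret
```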
The auto-configured Spring Data Neo4j RepositoriesSpring Data includes repository support for Neo4j. For complete details of Spring Data Neo4j, see the reference documentation. Spring Data Neo4j shares
the common infrastructure with Spring Data JPA as many other Spring Data modules do. You could take the JPA example from earlier and define Java
Kotlin
The You can customize the locations to look for repositories and entities by using
9.2.4. SolrApache Solr is a search engine. Spring Boot offers basic auto-configuration for the Solr 5 client library. Connecting to SolrYou can inject an auto-configured Java
Kotlin
If you add your own 9.2.5. ElasticsearchElasticsearch is an open source, distributed, RESTful search and analytics engine. Spring Boot offers basic auto-configuration for Elasticsearch clients. Spring Boot supports several clients:
Spring Boot provides a dedicated “Starter”, Connecting to Elasticsearch Using REST clientsElasticsearch
ships two different REST clients that you can use to query a cluster: the low-level client from the Properties
Yaml
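A sketch of client configuration through the spring.elasticsearch.* properties (the cluster address and credentials are placeholders):

```properties
spring.elasticsearch.uris=https://search.example.com:9200
spring.elasticsearch.socket-timeout=10s
spring.elasticsearch.username=user
spring.elasticsearch.password=secret
```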
Connecting to Elasticsearch Using RestClient If you have Additionally, if Properties
Yaml
Connecting to Elasticsearch Using ReactiveElasticsearchClient Spring Data Elasticsearch ships By default, Spring Boot will auto-configure and register a Properties
Yaml
If the Connecting to Elasticsearch by Using Spring DataTo connect to Elasticsearch, a Java
Kotlin
In the presence of Spring Data Elasticsearch RepositoriesSpring Data includes repository support for Elasticsearch. As with the JPA repositories discussed earlier, the basic principle is that queries are constructed for you automatically based on method names. In fact, both Spring Data JPA and Spring Data Elasticsearch share the same common infrastructure. You could take the JPA example from earlier and, assuming that Spring
Boot supports both classic and reactive Elasticsearch repositories, using the If you wish to use your own template for backing the Elasticsearch repositories, you can add your own You can choose to disable the repositories support with the following property: Properties
Yaml
9.2.6. CassandraCassandra is an open source, distributed database management system designed to handle large amounts of data across many
commodity servers. Spring Boot offers auto-configuration for Cassandra and the abstractions on top of it provided by Spring Data Cassandra. There is a Connecting to CassandraYou can inject an auto-configured Properties
Yaml
If the port is the same for all your contact points you can use a shortcut and only specify the host names, as shown in the following example: Properties
Yaml
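A minimal Cassandra connection sketch (keyspace, contact points, and datacenter name are placeholders):

```properties
spring.data.cassandra.keyspace-name=mykeyspace
spring.data.cassandra.contact-points=cassandrahost1:9042,cassandrahost2:9042
spring.data.cassandra.local-datacenter=datacenter1
```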
The following code listing shows how to inject a Cassandra bean: Java
Kotlin
If you add your own Spring Data Cassandra RepositoriesSpring Data
includes basic repository support for Cassandra. Currently, this is more limited than the JPA repositories discussed earlier and needs to annotate finder methods with 9.2.7. CouchbaseCouchbase is an open-source, distributed, multi-model NoSQL
document-oriented database that is optimized for interactive applications. Spring Boot offers auto-configuration for Couchbase and the abstractions on top of it provided by Spring Data Couchbase. There are Connecting to CouchbaseYou can get a Properties
Yaml
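A minimal Couchbase connection sketch (connection string and credentials are placeholders):

```properties
spring.couchbase.connection-string=couchbase://192.168.1.123
spring.couchbase.username=user
spring.couchbase.password=secret
```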
It is also possible to customize some of the Properties
Yaml
Spring Data Couchbase RepositoriesSpring Data includes repository support for Couchbase. For complete details of Spring Data Couchbase, see the reference documentation. You
can inject an auto-configured Properties
Yaml
The following example shows how to inject a Java
Kotlin
There are a few beans that you can define in your own configuration to override those provided by the auto-configuration:
To avoid hard-coding those names in your own config, you can reuse Java
Kotlin
9.2.8. LDAPLDAP (Lightweight Directory Access Protocol) is an open, vendor-neutral, industry standard application protocol for accessing and maintaining distributed directory information services over an IP network. Spring Boot offers auto-configuration for any compliant LDAP server as well as support for the embedded in-memory LDAP server from UnboundID. LDAP abstractions are provided by Spring Data LDAP. There is a Connecting to an LDAP ServerTo connect to an LDAP server, make sure you declare a dependency on the Properties
Yaml
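A minimal LDAP connection sketch using the spring.ldap.* properties (server URL and credentials are placeholders):

```properties
spring.ldap.urls=ldap://myserver:1235
spring.ldap.username=admin
spring.ldap.password=secret
```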
If
you need to customize connection settings, you can use the An Spring Data LDAP RepositoriesSpring Data includes repository support for LDAP. For complete details of Spring Data LDAP, see the reference documentation. You can also inject an auto-configured Java
Kotlin
Embedded In-memory LDAP ServerFor testing purposes, Spring Boot supports auto-configuration of an in-memory LDAP server from
UnboundID. To configure the server, add a dependency to Properties
Yaml
By default, the server starts on a random port and triggers the regular LDAP support. There is no need to specify a If there is a By default, a standard schema is used to validate 9.2.9. InfluxDBInfluxDB is an open-source time series database optimized for fast, high-availability storage and retrieval of time series data in fields such as operations monitoring, application metrics, Internet-of-Things sensor data, and real-time analytics. Connecting to InfluxDBSpring Boot auto-configures an Properties
Yaml
If the connection to InfluxDB requires a user and password, you can set the InfluxDB relies on OkHttp. If you need to tune the http client If you need more control over the configuration, consider registering an 9.3. What to Read NextYou should now have a feeling for how to use Spring Boot with various data technologies. From here, you can read about Spring Boot’s support for various messaging technologies and how to enable them in your application.
10. MessagingThe Spring Framework provides extensive support for integrating with messaging systems, from simplified use of the JMS API using 10.1. JMSThe 10.1.1. ActiveMQ SupportWhen ActiveMQ is available on the classpath, Spring Boot can also configure a
ActiveMQ configuration is controlled by external configuration properties in By default, ActiveMQ is auto-configured to use the VM transport, which starts a broker embedded in the same JVM instance. You can disable the embedded broker by configuring the Properties
Yaml
The embedded broker will also be disabled if you configure the broker URL, as shown in the following example: Properties
Yaml
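A sketch of pointing at an external ActiveMQ broker (URL and credentials are placeholders); note that configuring a broker URL also disables the embedded broker:

```properties
spring.activemq.broker-url=tcp://192.168.1.210:9876
spring.activemq.user=admin
spring.activemq.password=secret
```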
If you want to take full control over the embedded broker, see the ActiveMQ documentation for further information. By default, a Properties
Yaml
If you’d rather use native pooling, you can do so by adding a dependency to Properties
Yaml
By default, ActiveMQ creates a destination if it does not yet exist so that destinations are resolved against their provided names. 10.1.2. ActiveMQ Artemis SupportSpring Boot can auto-configure a
ActiveMQ Artemis configuration is controlled by external configuration properties in Properties
Yaml
When embedding the broker, you can choose if you want to enable persistence and list the destinations that should be made available. These can be specified as a comma-separated list to create them with the default
options, or you can define bean(s) of type By default, a Properties
Yaml
If you’d rather use native pooling, you can do so by adding a dependency to Properties
Yaml
No JNDI lookup is involved, and destinations are resolved against their names, using either the 10.1.3. Using a JNDI ConnectionFactoryIf you are running your application in an application server, Spring Boot tries to locate a JMS Properties
Yaml
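As a sketch, a specific JNDI location can be given instead of relying on the default lookup names (the JNDI path below is a placeholder):

```properties
# Look up this ConnectionFactory instead of the default JNDI names
spring.jms.jndi-name=java:/MyConnectionFactory
```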
10.1.4. Sending a MessageSpring’s Java
Kotlin
10.1.5. Receiving a MessageWhen the JMS infrastructure is present, any bean can be annotated with By default, the default factory is transactional. If you run in an infrastructure where a The following component creates a listener endpoint on the Java
Kotlin
If you need to create more For instance, the following example exposes another factory that uses a specific Java
Kotlin
Then you can use the factory in any Java
Kotlin
10.2. AMQPThe Advanced Message Queuing Protocol (AMQP) is a platform-neutral, wire-level protocol for message-oriented middleware. The Spring AMQP project applies core Spring concepts to the development of AMQP-based messaging solutions. Spring Boot offers several conveniences for working with AMQP through RabbitMQ, including the 10.2.1. RabbitMQ SupportRabbitMQ is a lightweight, reliable, scalable, and portable message broker based on the AMQP protocol. Spring uses RabbitMQ configuration is controlled by external configuration properties in Properties
Yaml
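A minimal RabbitMQ connection sketch (host and credentials are placeholders):

```properties
spring.rabbitmq.host=localhost
spring.rabbitmq.port=5672
spring.rabbitmq.username=admin
spring.rabbitmq.password=secret
```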
Alternatively, you could configure the same connection using the
See If a 10.2.2. Sending a MessageSpring’s Java
Kotlin
If necessary, any To retry operations, you can enable retries on the Properties
Yaml
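For example, retries on the auto-configured template can be switched on and tuned like this (the interval is illustrative):

```properties
# Enable retries for template operations
spring.rabbitmq.template.retry.enabled=true
spring.rabbitmq.template.retry.initial-interval=2s
```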
Retries are disabled by default. You can also customize the If you need to create more 10.2.3. Sending a Message To A StreamTo send a message to a particular stream, specify the name of the stream, as shown in the following example: Properties
Yaml
If a If you need to create more 10.2.4. Receiving a MessageWhen the Rabbit infrastructure is present, any bean can be annotated with The following sample component creates a listener endpoint on the Java
Kotlin
If you need to create more
For instance, the following configuration class exposes another factory that uses a specific Java
Kotlin
Then you can use the factory in any Java
Kotlin
You can enable retries to handle situations where your listener throws an exception. By default,
10.3. Apache Kafka SupportApache Kafka is supported by providing auto-configuration of the Kafka configuration is controlled by external configuration properties in Properties
Yaml
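A minimal Kafka configuration sketch (broker address and group id are placeholders):

```properties
spring.kafka.bootstrap-servers=localhost:9092
spring.kafka.consumer.group-id=myGroup
```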
10.3.1. Sending a MessageSpring’s Java
Kotlin
10.3.2. Receiving a MessageWhen the Apache Kafka infrastructure is present, any bean can be annotated with The following component creates a
listener endpoint on the Java
Kotlin
If a Depending on the listener type, a
10.3.3. Kafka StreamsSpring for Apache Kafka provides a factory bean to create a Enabling
Kafka Streams means that the application id and bootstrap servers must be set. The former can be configured using Several additional properties are available using dedicated properties; other arbitrary Kafka properties can be set using the To use the factory bean, wire Java
Kotlin
By default, the streams managed by the 10.3.4. Additional Kafka PropertiesThe properties supported by auto configuration are shown in the “Integration Properties” section of the Appendix. Note that, for the most part, these properties (hyphenated or camelCase) map directly to the Apache Kafka dotted properties. See the Apache Kafka documentation for details. The first few of these properties apply to all components (producers, consumers, admins, and streams) but can be specified at the component level if you wish to use different values. Apache Kafka designates properties with an importance of HIGH, MEDIUM, or LOW. Spring Boot auto-configuration supports all HIGH importance properties, some selected MEDIUM and LOW properties, and any properties that do not have a default value. Only a subset of the properties supported by Kafka are available directly through the Properties
Yaml
This
sets the common You can also configure the Spring Kafka Properties
Yaml
Similarly, you can disable the Properties
Yaml
10.3.5. Testing with Embedded KafkaSpring for Apache Kafka provides a convenient way to test projects with an embedded Apache Kafka broker. To use this feature, annotate a test class with To make Spring Boot auto-configuration work with the aforementioned embedded Apache Kafka broker, you need to remap a system property for embedded broker addresses (populated by the
Java
Kotlin
Java
Kotlin
Properties
Yaml
10.4. RSocketRSocket is a binary protocol for use on byte stream transports. It enables symmetric interaction models through async message passing over a single connection. The 10.4.1. RSocket Strategies Auto-configurationSpring Boot auto-configures an
The Developers can customize the 10.4.2. RSocket server Auto-configurationSpring Boot provides RSocket server auto-configuration. The required dependencies are provided by the Spring Boot allows exposing RSocket over WebSocket from a WebFlux server, or standing up an independent RSocket server. This depends on the type of application and its configuration. For
WebFlux application (that is of type Properties
Yaml
Alternatively, an RSocket TCP or websocket server is started as an independent, embedded server. Besides the dependency requirements, the only required configuration is to define a port for that server: Properties
Yaml
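As a sketch, the independent embedded RSocket server only needs a port (the value below is illustrative):

```properties
# Start an independent RSocket TCP server on this port
spring.rsocket.server.port=9898
```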
10.4.3. Spring Messaging RSocket supportSpring Boot will auto-configure the Spring Messaging infrastructure for RSocket. This means that Spring Boot will create a 10.4.4. Calling RSocket Services with RSocketRequesterOnce the As a server, you can get injected with an The The following code shows a typical example: Java
Kotlin
10.5. Spring IntegrationSpring Boot offers several conveniences for working with Spring Integration, including the Spring Integration polling logic relies on the auto-configured Spring Boot also
configures some features that are triggered by the presence of additional Spring Integration modules. If Properties
Yaml
If Spring Boot can also auto-configure an Properties
Yaml
Properties
Yaml
10.6. WebSocketsSpring Boot provides WebSockets auto-configuration for embedded Tomcat, Jetty, and Undertow. If you deploy a war file to a standalone container, Spring Boot assumes that the container is responsible for the configuration of its WebSocket support. Spring Framework provides
rich WebSocket support for MVC web applications that can be easily accessed through the WebSocket support is also available for reactive web applications and requires including the WebSocket API alongside
10.7. What to Read Next11. IOMost applications will need to deal with input and output concerns at some point. Spring Boot provides utilities and integrations with a range of technologies to help when you need IO capabilities. This section covers standard IO features such as caching and validation as well as more advanced topics such as scheduling and distributed transactions. We will also cover calling remote REST or SOAP services and sending email. 11.1. CachingThe Spring Framework provides support for transparently
adding caching to an application. At its core, the abstraction applies caching to methods, thus reducing the number of executions based on the information available in the cache. The caching logic is applied transparently, without any interference to the invoker. Spring Boot auto-configures the cache infrastructure as long as caching support is enabled by using the In a nutshell, to add caching to an operation of your service add the relevant annotation to its method, as shown in the following example: Java
Kotlin
This example demonstrates the use of caching on a potentially costly operation. Before invoking
If you do not add any specific cache library, Spring Boot auto-configures a simple provider that uses concurrent maps in memory. When a cache is required (such as
11.1.1. Supported Cache ProvidersThe cache abstraction does not provide an actual store and relies on abstraction materialized by the If you have not defined a bean of type
If the Java
Kotlin
GenericGeneric caching is used if the context defines at least one JCache (JSR-107)JCache is bootstrapped through the presence of a It might happen that more than one provider is present, in which case the provider must be explicitly specified. Even if the JSR-107 standard does not enforce a standardized way to define the location of the configuration file, Spring Boot does its best to accommodate setting a cache with implementation details, as shown in the following example: Properties
Yaml
There are two ways to customize the underlying
EhCache 2.xEhCache 2.x is used if a file named Properties
Yaml
HazelcastSpring Boot has general
support for Hazelcast. If a InfinispanInfinispan has no default configuration file location, so it must be specified explicitly. Otherwise, the default bootstrap is used. Properties
Yaml
Caches can be created on startup by setting the
CouchbaseIf Spring Data Couchbase is available and Couchbase is configured, a Properties
Yaml
If you need more control over the configuration, consider registering a Java
Kotlin
RedisIf Redis is available and configured, a Properties
Yaml
If you need more control over the configuration, consider registering a Java
Kotlin
CaffeineCaffeine is a Java 8 rewrite of Guava’s cache that supersedes support for Guava. If Caffeine is present, a
For instance, the following configuration creates Properties
Yaml
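A sketch of pre-creating caches backed by a Caffeine spec (cache names and spec values are illustrative):

```properties
# Pre-create these caches using the given Caffeine specification
spring.cache.cache-names=cache1,cache2
spring.cache.caffeine.spec=maximumSize=500,expireAfterAccess=600s
```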
If a Cache2kCache2k is an in-memory cache. If the Cache2k spring integration is present, a Caches can be created on startup by
setting the Java
Kotlin
SimpleIf none
of the other providers can be found, a simple implementation using a Properties
Yaml
If you do so and your application uses a cache not listed, then it fails at runtime when the cache is needed, but not on startup. This is similar to the way the "real" cache providers behave if you use an undeclared cache. NoneWhen Yaml
11.2. HazelcastIf Hazelcast is on the classpath and a
suitable configuration is found, Spring Boot auto-configures a Spring Boot first attempts to create a client by checking the following configuration options:
If a client cannot be created, Spring Boot attempts to configure an embedded server. If you define a You could also specify the Hazelcast configuration file to use through configuration, as shown in the following example: Properties
Yaml
Otherwise, Spring Boot tries to find the Hazelcast configuration from the default locations:
11.3. Quartz SchedulerSpring Boot offers several conveniences for working with the Quartz scheduler, including the Beans of the following types are
automatically picked up and associated with the
By default, an in-memory Properties
Yaml
When the JDBC store is used, the schema can be initialized on startup, as shown in the following example: Properties
Yaml
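A sketch of switching Quartz to a JDBC job store with schema initialization on startup:

```properties
# Persist jobs in the database instead of in memory
spring.quartz.job-store-type=jdbc
# Create the Quartz tables on startup
spring.quartz.jdbc.initialize-schema=always
```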
To have Quartz use a By default, jobs created by configuration will not
overwrite already registered jobs that have been read from a persistent job store. To enable overwriting existing job definitions set the Quartz Scheduler configuration can be customized using
Jobs can define setters to inject data map properties. Regular beans can also be injected in a similar manner, as shown in the following example: Java
Kotlin
11.4. Sending EmailThe Spring Framework provides an abstraction for sending email by using the
If In particular, certain default timeout values are infinite, and you may want to change that to avoid having a thread blocked by an unresponsive mail server, as shown in the following example: Properties
Yaml
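For example, the underlying JavaMail session timeouts can be set through pass-through properties (the millisecond values are illustrative):

```properties
spring.mail.properties[mail.smtp.connectiontimeout]=5000
spring.mail.properties[mail.smtp.timeout]=3000
spring.mail.properties[mail.smtp.writetimeout]=5000
```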
It is also possible to configure a Properties
Yaml
When a 11.5. ValidationThe method validation feature supported
by Bean Validation 1.1 is automatically enabled as long as a JSR-303 implementation (such as Hibernate validator) is on the classpath. This lets bean methods be annotated with For instance, the following service triggers the validation of the first argument, making sure its size is between 8 and 10: Java
Kotlin
The application’s 11.6. Calling REST ServicesIf your application calls remote REST services, Spring Boot makes that very convenient using a 11.6.1. RestTemplateIf you need to call remote REST services from your application, you can use the Spring Framework’s The following code shows a typical example: Java
Kotlin
RestTemplate CustomizationThere are three main approaches to To make the scope of any customizations as narrow as possible, inject the auto-configured To make an application-wide, additive customization, use a The following example shows a customizer that configures the use of a proxy for all hosts except Java
Kotlin
Finally, you can define your own Java
Kotlin
The most extreme (and rarely used) option is to create your own 11.6.2. WebClientIf you have Spring
WebFlux on your classpath, you can also choose to use Spring Boot creates and pre-configures a The following code shows a typical example: Java
Kotlin
WebClient RuntimeSpring Boot will auto-detect which The Developers can override the resource configuration for Jetty and Reactor Netty by providing a custom If you wish to override that choice for the client, you can define your own WebClient CustomizationThere are three main approaches to To make the scope of any customizations as narrow as possible, inject the auto-configured To make an application-wide, additive customization to all Finally, you can fall back to the original API and use 11.7. Web ServicesSpring Boot provides Web Services auto-configuration so that all you must do is define your
Properties
Yaml
11.7.1. Calling Web Services with WebServiceTemplateIf you need to call remote Web services from your application, you can use the The following code shows a typical example: Java
Kotlin
By default, Java
Kotlin
11.8. Distributed Transactions With JTASpring Boot supports distributed JTA transactions across multiple XA resources by using an Atomikos embedded transaction manager. JTA transactions are also supported when deploying to a suitable Java EE Application Server. When a JTA environment is detected, Spring’s 11.8.1. Using an Atomikos Transaction ManagerAtomikos is a popular open source transaction manager which can be embedded into your Spring Boot application. You can use the By default, Atomikos transaction logs are written to a
11.8.2. Using a Java EE Managed Transaction ManagerIf you package your Spring Boot application as a 11.8.3. Mixing XA and Non-XA JMS ConnectionsWhen using JTA, the primary JMS Java
In
some situations, you might want to process certain JMS messages by using a non-XA If you want to use a non-XA Java
For consistency, the Java
11.8.4. Supporting an Alternative Embedded Transaction ManagerThe 11.9. What to Read NextYou should now have a good understanding of Spring Boot’s core features and the various technologies that Spring Boot provides support for via auto-configuration. 12. Container Images12.1. Efficient Container ImagesIt is easily possible to package a Spring Boot fat jar as a docker image. However, there are various downsides to copying and running the fat jar as is in the docker image. There’s always a certain amount of overhead when running a fat jar without unpacking it, and in a containerized environment this can be noticeable. The other issue is that putting your application’s code and all its dependencies in one layer in the Docker image is sub-optimal. Since you probably recompile your code more often than you upgrade the version of Spring Boot you use, it’s often better to separate things a bit more. If you put jar files in the layer before your application classes, Docker often only needs to change the very bottom layer and can pick others up from its cache. 12.1.1. Unpacking the Executable JARIf you are running your application from a container, you can use an executable jar, but it is also often an advantage to explode it and run it in a different way. Certain PaaS implementations may also choose to unpack archives before they run. For example, Cloud Foundry operates this way. One way to run an unpacked archive is by starting the appropriate launcher, as follows:
This is actually slightly faster on startup (depending on the size of the jar) than running from an unexploded archive. At runtime you should not expect any differences. Once you have unpacked the jar file, you can also get an extra boost to startup time by running the app with its "natural" main method instead of the
12.1.2. Layering Docker ImagesTo make it easier to create optimized Docker images, Spring Boot supports adding a layer index file to the jar. It provides a list of layers and the parts of the jar that should be contained within them. The list of layers in the index is ordered based on the order in which the layers should be added to the Docker/OCI image. Out-of-the-box, the following layers are supported:
The following shows an example of a
This layering is designed to separate code based on how likely it is to change between application builds. Library code is less likely to change between builds, so it is placed in its own layers to allow tooling to re-use the layers from cache. Application code is more likely to change between builds so it is isolated in a separate layer. Spring Boot also supports layering for war files with the help of a 12.2. DockerfilesWhile it is possible to convert a Spring Boot fat jar into a docker image with just a few lines in the Dockerfile, we will use the layering feature to create an optimized docker image. When you create a jar
containing the layers index file, the
Here’s how you can launch your jar with a
This will provide the following output:

Usage:
  java -Djarmode=layertools -jar my-app.jar

Available commands:
  list     List layers from the jar that can be extracted
  extract  Extracts layers from the jar for image creation
  help     Help about any command

The
Assuming the above
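The Dockerfile itself is missing at this point in the extraction; a representative multi-stage Dockerfile using the layertools extract command looks like the following (the base image and jar file name are illustrative placeholders):

```dockerfile
# Builder stage: unpack the layered jar into separate directories
FROM eclipse-temurin:11-jre as builder
WORKDIR application
COPY my-app.jar application.jar
RUN java -Djarmode=layertools -jar application.jar extract

# Runtime stage: copy each layer separately so Docker can cache the stable ones
FROM eclipse-temurin:11-jre
WORKDIR application
COPY --from=builder application/dependencies/ ./
COPY --from=builder application/spring-boot-loader/ ./
COPY --from=builder application/snapshot-dependencies/ ./
COPY --from=builder application/application/ ./
ENTRYPOINT ["java", "org.springframework.boot.loader.JarLauncher"]
```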
This is a multi-stage dockerfile. The builder stage extracts the directories that are needed later. Each of the Of course, a Dockerfile can be written without using the jarmode. You can use some combination of 12.3. Cloud Native BuildpacksDockerfiles are just one way to build docker images. Another way to build docker images is directly from your Maven or Gradle plugin, using buildpacks. If you’ve ever used an application platform such as Cloud Foundry or Heroku then you’ve probably used a buildpack. Buildpacks are the part of the platform that takes your application and
converts it into something that the platform can actually run. For example, Cloud Foundry’s Java buildpack will notice that you’re pushing a With Cloud Native Buildpacks, you can create Docker compatible images that you can run anywhere. Spring Boot includes buildpack support directly for both Maven and Gradle. This means you can just type a single command and quickly get a sensible image into your locally running Docker daemon. See the individual plugin documentation on how to use buildpacks with Maven and Gradle.
12.4. What to Read Next13. Production-ready FeaturesSpring Boot includes a number of additional features to help you monitor and manage your application when you push it to production. You can choose to manage and monitor your application by using HTTP endpoints or with JMX. Auditing, health, and metrics gathering can also be automatically applied to your application. 13.1. Enabling Production-ready FeaturesThe spring-boot-actuator module provides all of Spring Boot’s production-ready features. The recommended way to enable the features is to add a dependency on the spring-boot-starter-actuator ‘Starter’.
To add the actuator to a Maven-based project, add the following ‘Starter’ dependency:
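For reference, the actuator ‘Starter’ in Maven form (the version is inherited from the Spring Boot parent or BOM):

```xml
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
```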
For Gradle, use the following declaration:
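In Gradle form:

```gradle
dependencies {
    implementation 'org.springframework.boot:spring-boot-starter-actuator'
}
```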
13.2. EndpointsActuator endpoints let you monitor and interact with your application. Spring Boot includes a number of built-in endpoints and lets you add your own. For example, the health endpoint provides basic application health information. You can
enable or disable each individual endpoint and expose them (make them remotely accessible) over HTTP or JMX. An endpoint is considered to be available when it is both enabled and exposed. The built-in endpoints are auto-configured only when
they are available. Most applications choose exposure over HTTP, where the ID of the endpoint and a prefix of /actuator is mapped to a URL. For example, by default, the health endpoint is mapped to /actuator/health.
The following technology-agnostic endpoints are available:
If your application is a web application (Spring MVC, Spring WebFlux, or Jersey), you can use the following additional endpoints:
13.2.1. Enabling EndpointsBy default, all endpoints except for shutdown are enabled. To configure the enablement of an endpoint, use its management.endpoint.<id>.enabled property. The following example enables the shutdown endpoint: Properties
Yaml
If you prefer endpoint enablement to be opt-in rather than opt-out, set the management.endpoints.enabled-by-default property to false and use individual endpoint enabled properties to opt back in. The following example enables the info endpoint and disables all other endpoints: Properties
Yaml
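In properties form, the opt-in configuration that enables only the info endpoint looks like this:

```properties
management.endpoints.enabled-by-default=false
management.endpoint.info.enabled=true
```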
13.2.2. Exposing EndpointsSince endpoints may contain sensitive information, you should carefully consider when to expose them. The following table shows the default exposure for the built-in endpoints:
To change which endpoints are exposed, use the following technology-specific include and exclude properties:
The include property lists the IDs of the endpoints that are exposed, while the exclude property lists the IDs of the endpoints that should not be exposed. The exclude property takes precedence over the include property. For example, to stop exposing all endpoints over JMX and only expose the health and info endpoints, use the following property: Properties
Yaml
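In properties form, exposing only the health and info endpoints over JMX:

```properties
management.endpoints.jmx.exposure.include=health,info
```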
Properties
Yaml
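* can be used to select all endpoints. For example, to expose everything over HTTP except the env and beans endpoints, in properties form:

```properties
management.endpoints.web.exposure.include=*
management.endpoints.web.exposure.exclude=env,beans
```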
13.2.3. SecurityFor security purposes, only the
If Spring Security is on the classpath and no other If you wish to configure custom security for HTTP endpoints (for example, to allow only users with a certain role to access them), Spring Boot provides
some convenient A typical Spring Security configuration might look something like the following example: Java
Kotlin
The preceding example uses If you deploy applications behind a firewall, you may prefer that all your actuator endpoints can be accessed without requiring authentication. You can do so by changing the Properties
Yaml
Additionally, if Spring Security is present, you would need to add custom security configuration that allows unauthenticated access to the endpoints, as the following example shows: Java
Kotlin
Cross Site Request Forgery ProtectionSince Spring Boot relies on Spring Security’s defaults, CSRF protection is turned on by default. This means that the actuator endpoints that require a POST (shutdown and loggers endpoints), a PUT, or a DELETE get a 403 (forbidden) error when the default security configuration is in use.
13.2.4. Configuring EndpointsEndpoints automatically cache responses to read operations that do not take any parameters. To configure the amount of time for which an endpoint caches a response, use its cache.time-to-live property. The following example sets the time-to-live of the beans endpoint’s cache to 10 seconds: Properties
Yaml
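In properties form, using the beans endpoint as an illustration:

```properties
management.endpoint.beans.cache.time-to-live=10s
```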
13.2.5. Hypermedia for Actuator Web EndpointsA “discovery page” is added with links to all the endpoints. The “discovery page” is available on /actuator by default. To disable the “discovery page”, add the following property to your application properties: Properties
Yaml
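In properties form:

```properties
management.endpoints.web.discovery.enabled=false
```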
When a custom management context path is configured, the “discovery page” automatically moves from /actuator to the root of the management context. For example, if the management context path is /management, the discovery page is available from /management. 13.2.6. CORS SupportCross-origin resource sharing (CORS) is a W3C specification that lets you specify in a flexible way what kind of cross-domain requests are authorized. If you use Spring MVC or Spring WebFlux, you can configure Actuator’s web endpoints to support such scenarios. CORS support is disabled by default and is only enabled once you have set the management.endpoints.web.cors.allowed-origins property. The following configuration permits GET and POST calls from the example.com domain: Properties
Yaml
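In properties form, a configuration that permits GET and POST calls from the example.com domain:

```properties
management.endpoints.web.cors.allowed-origins=https://example.com
management.endpoints.web.cors.allowed-methods=GET,POST
```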
13.2.7. Implementing Custom EndpointsIf you add a The following example exposes a read operation that returns a custom object: Java
Kotlin
You can also write technology-specific endpoints by using You can write technology-specific extensions by using Finally, if you need access to web-framework-specific functionality, you can implement servlet or Spring Receiving InputOperations on an endpoint receive input through their parameters. When exposed over the web, the values for these parameters are taken from the URL’s query parameters and from the JSON request body. When exposed over JMX, the parameters are mapped to the parameters of the MBean’s operations. Parameters are required by default. They
can be made optional by annotating them with either You can map each root property in the JSON request body to a parameter of the endpoint. Consider the following JSON request body:
You can use this to invoke a write operation that takes Java
Kotlin
Input Type Conversion The parameters passed to endpoint operation methods are, if necessary, automatically converted to the required type. Before calling an operation method, the input received over JMX or HTTP is converted to the required
types by using an instance of Custom Web EndpointsOperations on an Web Endpoint Request Predicates A request predicate is automatically generated for each operation on a web-exposed endpoint. Path The path of the predicate is determined by the ID of the endpoint and the base path of the web-exposed endpoints. The default base path is You can further
customize the path by annotating one or more parameters of the operation method with HTTP method The HTTP method of the predicate is determined by the operation type, as shown in the following table:
Consumes For a Produces The If the operation
method returns Web Endpoint Response Status The default response status for an endpoint operation depends on the operation type (read, write, or delete) and what, if anything, the operation returns. If a If a If an operation is invoked without a required parameter or with a parameter that cannot be converted to the required type, the operation method is not called, and the response status will be 400 (Bad Request). Web Endpoint Range Requests You can use an HTTP range request to request part of an HTTP
resource. When using Spring MVC or Spring Web Flux, operations that return a
Web Endpoint Security An operation on a web endpoint or a web-specific endpoint extension can receive the current Servlet EndpointsA servlet can be exposed as an endpoint by implementing a class annotated with Controller EndpointsYou can use 13.2.8. Health InformationYou can use health information to check the status of your running application. It is often used by monitoring software to
alert someone when a production system goes down. The information exposed by the health endpoint depends on the management.endpoint.health.show-details and management.endpoint.health.show-components properties, which can be configured with one of the following values:
The default value is never. A user is considered to be authorized when they are in one or more of the endpoint’s roles.
Health information is collected from the content of a A By default, the final system health is derived by a
Writing Custom HealthIndicatorsTo provide custom health information, you can register Spring beans that implement the
Java
Kotlin
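A minimal Java health indicator in the spirit of the canonical example (the check() logic here is a placeholder for a real health check, such as pinging a downstream service):

```java
import org.springframework.boot.actuate.health.Health;
import org.springframework.boot.actuate.health.HealthIndicator;
import org.springframework.stereotype.Component;

@Component
public class MyHealthIndicator implements HealthIndicator {

    @Override
    public Health health() {
        int errorCode = check();
        if (errorCode != 0) {
            // Report DOWN with a detail entry describing the failure
            return Health.down().withDetail("Error Code", errorCode).build();
        }
        return Health.up().build();
    }

    // Placeholder: perform some specific health check and return an error code
    private int check() {
        return 0;
    }
}
```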
In addition to Spring Boot’s predefined For example, assume a new Properties
Yaml
The HTTP status code in the response reflects the overall health status. By default, Properties
Yaml
The following table shows the default status mappings for the built-in statuses:
Reactive Health IndicatorsFor reactive applications, such as those that use Spring WebFlux,
To provide custom health information from a reactive API, you can register Spring beans that implement the Java
Kotlin
Health GroupsIt is sometimes useful to organize health indicators into groups that you can use for different purposes. To create a health indicator group, you can use the Properties
Yaml
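For example, in properties form, a group named custom that contains only the database indicator (the group name is illustrative):

```properties
management.endpoint.health.group.custom.include=db
```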
Similarly, to create a group that excludes the database indicators from the group and includes all the other indicators, you can define the following: Properties
Yaml
By default, groups inherit the same Properties
Yaml
A health group can also include/exclude a
In the example above, the Health groups can be made available at an additional path on either the main or management port. This is useful in cloud environments such as Kubernetes, where it is quite common to use a separate management port for the actuator endpoints for security purposes. Having a separate port could lead to unreliable health checks because the main application might not work properly even if the health check is successful. The health group can be configured with an additional path as follows:
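In properties form, a sketch that exposes a group named live at /healthz on the main server port (the group name is illustrative):

```properties
management.endpoint.health.group.live.additional-path=server:/healthz
```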
This would make the DataSource HealthThe 13.2.9. Kubernetes ProbesApplications deployed on Kubernetes can provide information about their internal state with Container Probes. Depending on your Kubernetes configuration, the kubelet calls those probes and reacts to the result. By default, Spring Boot manages your
Application Availability State. If deployed in a Kubernetes environment, actuator gathers the “Liveness” and “Readiness” information from the You can then configure your Kubernetes infrastructure with the following endpoint information:
These health groups are automatically enabled only if the application runs in a Kubernetes environment. You can enable them in any environment by using the
If your Actuator endpoints are deployed on a separate management context, the endpoints do not use the same web infrastructure (port, connection pools, framework components) as the main application. In this case, a probe check could be successful even if the main application does not work properly (for example, it cannot accept new connections). For this reason, it is a good idea to make the liveness and readiness health groups available on the main server port:
This would make Checking External State With Kubernetes ProbesActuator configures the “liveness” and “readiness” probes as Health Groups. This means that all the health groups features are available for them. You can, for example, configure additional Health Indicators: Properties
Yaml
By default, Spring Boot does not add other health indicators to these groups. The “liveness” probe should not depend on health checks for external systems. If the liveness state of an application is broken, Kubernetes tries to solve that problem by restarting the application instance. This means that if an external system (such as a database, a Web API, or an external cache) fails, Kubernetes might restart all application instances and create cascading failures. As for the “readiness” probe, the choice of checking external systems must be made carefully by the application developers. For this reason, Spring Boot does not include any additional health checks in the readiness probe. If the readiness state of an application instance is unready, Kubernetes does not route traffic to that instance. Some external systems might not be shared by application instances, in which case they could be included in a readiness probe. Other external systems might not be essential to the application (the application could have circuit breakers and fallbacks), in which case they definitely should not be included. Unfortunately, an external system that is shared by all application instances is common, and you have to make a judgement call: Include it in the readiness probe and expect that the application is taken out of service when the external service is down or leave it out and deal with failures higher up the stack, perhaps by using a circuit breaker in the caller.
Also, if an application uses Kubernetes autoscaling, it may react differently to applications being taken out of the load-balancer, depending on its autoscaler configuration. Application Lifecycle and Probe StatesAn important aspect of the Kubernetes Probes support is its consistency with the application lifecycle. There is a significant difference between the The following tables show the When a Spring Boot application starts:
When a Spring Boot application shuts down:
13.2.10. Application InformationApplication information exposes various information collected from all
Auto-configured InfoContributorsWhen appropriate, Spring auto-configures the following
Whether an individual contributor is enabled is controlled by its With no prerequisites to indicate that they should be enabled, the The Custom Application InformationWhen the Properties
Yaml
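In properties form, a sketch that publishes a few custom info.* keys through the env info contributor (the key names are illustrative):

```properties
management.info.env.enabled=true
info.app.name=MyService
info.app.version=1.0.0
```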
Git Commit InformationAnother useful feature of the
By default, the endpoint exposes Properties
Yaml
To disable the git commit information from
the Properties
Yaml
Build InformationIf a Java InformationThe OS InformationThe Writing Custom InfoContributorsTo provide custom application information, you can register Spring beans
that implement the The following example contributes an Java
Kotlin
If you reach the
13.3. Monitoring and Management Over HTTPIf you are developing a web application, Spring Boot Actuator auto-configures all enabled endpoints to be exposed over HTTP. The default convention is to use the
13.3.1. Customizing the Management Endpoint PathsSometimes, it is useful to customize the prefix for the management endpoints. For example, your application might already use Properties
Yaml
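For example, in properties form, changing the prefix from /actuator to /manage:

```properties
management.endpoints.web.base-path=/manage
```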
The preceding
If you want to map endpoints to a different path, you can use the The following example remaps Properties
Yaml
13.3.2. Customizing the Management Server PortExposing management endpoints by using the default HTTP port is a sensible choice for cloud-based deployments. If, however, your application runs inside your own data center, you may prefer to expose endpoints by using a different HTTP port. You can set the Properties
Yaml
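In properties form:

```properties
management.server.port=8081
```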
13.3.3. Configuring Management-specific SSLWhen configured to use a custom port, you can also configure the management server with its own SSL by using the various Properties
Yaml
Alternatively, both the main server and the management server can use SSL but with different key stores, as follows: Properties
Yaml
13.3.4. Customizing the Management Server AddressYou can customize the address on which the management endpoints are available by setting the management.server.address property. Doing so can be useful if you want to listen only on an internal or ops-facing network or to listen only for connections from localhost.
The following example does not allow remote management connections: Properties
Yaml
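In properties form:

```properties
management.server.port=8081
management.server.address=127.0.0.1
```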
13.3.5. Disabling HTTP EndpointsIf you do not want to expose endpoints over HTTP, you can
set the management port to -1, as the following example shows: Properties
Yaml
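In properties form:

```properties
management.server.port=-1
```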
You can also achieve this by using the management.endpoints.web.exposure.exclude property, as the following example shows: Properties
Yaml
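In properties form:

```properties
management.endpoints.web.exposure.exclude=*
```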
13.4. Monitoring and Management over JMXJava Management Extensions (JMX) provide a standard mechanism to monitor and manage applications. By default, this feature is not enabled. You can turn it on by setting the spring.jmx.enabled configuration property to true. If your platform provides a
standard By default, Spring Boot also exposes management endpoints as JMX MBeans under the 13.4.1. Customizing MBean NamesThe name of the MBean is usually generated from the If your application contains more than one Spring You can also customize the JMX domain under which endpoints are exposed. The following settings show an example of doing so in Properties
Yaml
13.4.2. Disabling JMX EndpointsIf you do not want to
expose endpoints over JMX, you can set the Properties
Yaml
13.4.3. Using Jolokia for JMX over HTTPJolokia is a JMX-HTTP bridge that provides an alternative method of accessing JMX beans. To use
Jolokia, include a dependency to org.jolokia:jolokia-core, as the following example shows:
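In Maven form:

```xml
<dependency>
    <groupId>org.jolokia</groupId>
    <artifactId>jolokia-core</artifactId>
</dependency>
```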
You can then expose the Jolokia endpoint by adding jolokia or * to the management.endpoints.web.exposure.include property. You can then access it by using /actuator/jolokia on your management HTTP server.
Customizing JolokiaJolokia has a number of settings that you would traditionally configure by setting servlet parameters. With Spring Boot, you can use your Properties
Yaml
Disabling JolokiaIf you use Jolokia but do not want Spring Boot to configure it, set the Properties
Yaml
13.5. LoggersSpring Boot Actuator includes the ability to view and configure the log levels of your application at runtime. You can view either the entire list or an individual logger’s configuration, which is made up of both the explicitly configured logging level as well as the effective logging level given to it by the logging framework. These levels can be one of:
13.5.1. Configure a LoggerTo configure a given logger,
13.6. MetricsSpring Boot Actuator provides dependency management and auto-configuration for Micrometer, an application metrics facade that supports numerous monitoring systems, including:
13.6.1. Getting startedSpring Boot auto-configures a composite Most registries share common features. For instance, you can disable a particular registry even if the Micrometer registry implementation is on the classpath. The following example disables Datadog: Properties
Yaml
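In properties form:

```properties
management.metrics.export.datadog.enabled=false
```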
You can also disable all registries unless stated otherwise by the registry-specific property, as the following example shows: Properties
Yaml
Spring Boot
also adds any auto-configured registries to the global static composite registry on the Properties
Yaml
You can register any number of Java
Kotlin
You can apply customizations to particular registry implementations by being more specific about the generic type: Java
Kotlin
13.6.2. Supported Monitoring SystemsThis section briefly describes each of the supported monitoring systems. AppOpticsBy default, the AppOptics registry periodically pushes metrics to Properties
Yaml
AtlasBy default, metrics are exported to Atlas running on your local machine. You can provide the location of the Atlas server: Properties
Yaml
DatadogA Datadog registry periodically pushes metrics to datadoghq. To export metrics to Datadog, you must provide your API key: Properties
Yaml
If you additionally provide an application key (optional), then metadata such as meter descriptions, types, and base units will also be exported: Properties
Yaml
By default, metrics are sent to the Datadog US site ( Properties
Yaml
You can also change the interval at which metrics are sent to Datadog: Properties
Yaml
DynatraceDynatrace offers two metrics ingest APIs, both of which are implemented for Micrometer. You can find the Dynatrace documentation on Micrometer metrics ingest
here. Configuration properties in the v2 API You can use the v2 API in two ways. Auto-configuration Dynatrace auto-configuration is available for hosts that are monitored by the OneAgent or by the Dynatrace Operator for Kubernetes. Local OneAgent: If a OneAgent is running on the host, metrics are automatically exported to the local OneAgent ingest endpoint. The ingest endpoint forwards the metrics to the Dynatrace backend. Dynatrace Kubernetes Operator: When running in Kubernetes with the Dynatrace Operator installed, the registry will automatically pick up your endpoint URI and API token from the operator instead. This is the default behavior and requires no special setup beyond a dependency on Manual configuration If no auto-configuration is available, the endpoint of the Metrics v2 API and an API token are required. The
API token must have the “Ingest metrics” ( The URL of the Metrics API v2 ingest endpoint is different according to your deployment option:
The example below configures metrics export using the Properties
Yaml
When using the Dynatrace v2 API, the following optional features are available (more details can be found in the Dynatrace documentation):
It is possible to not specify a URI and API token, as shown in the following example. In this scenario, the automatically configured endpoint is used: Properties
Yaml
v1 API (Legacy) The Dynatrace v1 API metrics registry pushes metrics to the configured URI periodically by using the Timeseries v1 API. For backwards-compatibility with existing setups, when Properties
Yaml
For the v1 API, you must specify the base environment URI without a path, as the v1 endpoint path is added automatically. Version-independent Settings In addition to the API endpoint and token, you can also change the interval at which metrics are sent to Dynatrace. The default export interval is Properties
Yaml
ElasticBy default, metrics are exported to Elastic running on your local machine. You can provide the location of the Elastic server to use by using the following property: Properties
Yaml
GangliaBy default, metrics are exported to Ganglia running on your local machine. You can provide the Ganglia server host and port, as the following example shows: Properties
Yaml
GraphiteBy default, metrics are exported to Graphite running on your local machine. You can provide the Graphite server host and port, as the following example shows: Properties
Yaml
HumioBy default, the Humio registry periodically pushes metrics to cloud.humio.com. To export metrics to SaaS Humio, you must provide your API token: Properties
Yaml
You should also configure one or more tags to identify the data source to which metrics are pushed: Properties
Yaml
InfluxBy default, metrics are exported to an
Influx v1 instance running on your local machine with the default configuration. To export metrics to InfluxDB v2, configure the Properties
Yaml
JMXMicrometer provides a hierarchical mapping to JMX, primarily as a cheap and portable way to view metrics locally. By default, metrics are exported to the Properties
Yaml
KairosDBBy default, metrics are exported to KairosDB running on your local machine. You can provide the location of the KairosDB server to use by using: Properties
Yaml
New RelicA New Relic registry periodically pushes metrics to New Relic. To export metrics to New Relic, you must provide your API key and account ID: Properties
Yaml
You can also change the interval at which metrics are sent to New Relic: Properties
Yaml
By default, metrics are published through REST calls, but you can also use the Java Agent API if you have it on the classpath: Properties
Yaml
Finally, you can take full control by defining your own PrometheusPrometheus expects to scrape or poll individual application instances for metrics. Spring Boot provides an
actuator endpoint at /actuator/prometheus to present a Prometheus scrape with the appropriate format.
The following example scrape_config section can be added to prometheus.yml:
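A scrape_config fragment along these lines (HOST and PORT are placeholders for your application’s address):

```yaml
scrape_configs:
  - job_name: "spring"
    metrics_path: "/actuator/prometheus"
    static_configs:
      - targets: ["HOST:PORT"]
```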
Prometheus Exemplars are also supported. To enable this feature, a For ephemeral or batch jobs that may not exist long enough to be scraped, you can use Prometheus Pushgateway support to expose the metrics to Prometheus. To enable Prometheus Pushgateway support, add the following dependency to your project:
When the Prometheus Pushgateway dependency is present on the classpath and the You can tune the
SignalFxThe SignalFx registry periodically pushes metrics to SignalFx. To export metrics to SignalFx, you must provide your access token: Properties
Yaml
You can also change the interval at which metrics are sent to SignalFx: Properties
Yaml
SimpleMicrometer ships with a simple, in-memory backend that is automatically used as a fallback if no other registry is configured. This lets you see what metrics are collected in the metrics endpoint. The in-memory backend disables itself as soon as you use any other available backend. You can also disable it explicitly: Properties
Yaml
StackdriverThe Stackdriver registry periodically pushes metrics to Stackdriver. To export metrics to SaaS Stackdriver, you must provide your Google Cloud project ID: Properties
Yaml
You can also change the interval at which metrics are sent to Stackdriver: Properties
Yaml
StatsDThe StatsD registry eagerly pushes metrics over UDP to a StatsD agent. By default, metrics are exported to a StatsD agent running on your local machine. You can provide the StatsD agent host, port, and protocol to use by using: Properties
Yaml
You can also change the StatsD line protocol to use (it defaults to Datadog): Properties
Yaml
WavefrontThe Wavefront registry periodically pushes metrics to Wavefront. If you are exporting metrics to Wavefront directly, you must provide your API token: Properties
Yaml
Alternatively, you can use a Wavefront sidecar or an internal proxy in your environment to forward metrics data to the Wavefront API host:
Properties
Yaml
You can also change the interval at which metrics are sent to Wavefront: Properties
Yaml
13.6.3. Supported Metrics and MetersSpring Boot provides automatic meter registration for a wide variety of technologies. In most situations, the defaults provide sensible metrics that can be published to any of the supported monitoring systems. JVM MetricsAuto-configuration enables JVM Metrics by using core Micrometer classes. JVM metrics are published under the The following JVM metrics are provided:
System MetricsAuto-configuration enables system metrics by using core Micrometer
classes. System metrics are published under the The following system metrics are provided:
Application Startup MetricsAuto-configuration exposes application startup time metrics:
Metrics are tagged by the fully qualified name of the application class. Logger MetricsAuto-configuration enables the event metrics for both Logback and Log4J2. The details are published under the Task Execution and Scheduling MetricsAuto-configuration enables the instrumentation of all available Spring MVC MetricsAuto-configuration enables the instrumentation of all requests handled by Spring MVC controllers and functional handlers. By default, metrics are generated with the name,
By default, Spring MVC related metrics are tagged with the following information:
To add to the default tags, provide one or more By default, all requests are handled. To customize the filter, provide a Spring WebFlux MetricsAuto-configuration enables the instrumentation of all requests handled by Spring WebFlux controllers and functional handlers. By default, metrics are generated with the name,
By default, WebFlux related metrics are tagged with the following information:
To add to the default tags, provide one or more beans that implement Jersey Server MetricsAuto-configuration enables the instrumentation of all requests handled by the Jersey JAX-RS
implementation. By default, metrics are generated with the name,
By default, Jersey server metrics are tagged with the following information:
To customize the tags, provide a HTTP Client MetricsSpring Boot Actuator manages the instrumentation of both
You can also manually apply the customizers responsible for this instrumentation, namely By default, metrics are generated with the name, By default, metrics generated by an instrumented client are tagged with the following information:
To customize the tags, and depending on your choice of client, you can provide a If you do not want to record metrics for all Tomcat MetricsAuto-configuration enables the instrumentation of Tomcat only when an Tomcat metrics are published under the Cache MetricsAuto-configuration enables the instrumentation of all available The following cache libraries are supported:
Metrics are tagged by the name of the cache and by the name of the
Spring GraphQL MetricsAuto-configuration enables the instrumentation of GraphQL queries, for any supported transport. Spring Boot records a
A single GraphQL query can involve many
The A single response can contain many GraphQL errors, counted by the
DataSource MetricsAuto-configuration enables the instrumentation of all available Metrics are also tagged by the name of the
Also, Hikari-specific metrics are exposed with a Hibernate MetricsIf Metrics are also tagged by the name of the To enable statistics, the standard JPA property Properties
Yaml
Spring Data Repository MetricsAuto-configuration enables the instrumentation of all Spring Data
By default, repository invocation related metrics are tagged with the following information:
To replace the default tags, provide a RabbitMQ MetricsAuto-configuration enables the instrumentation of all available RabbitMQ connection factories with a metric named Spring Integration MetricsSpring Integration automatically provides Micrometer support whenever a Kafka MetricsAuto-configuration registers a MongoDB MetricsThis section briefly describes the available metrics for MongoDB. MongoDB Command Metrics Auto-configuration registers a A timer metric named
To replace the default metric tags, define a Java
Kotlin
To disable the auto-configured command metrics, set the following property: Properties
Yaml
MongoDB Connection Pool Metrics Auto-configuration registers a The following gauge metrics are created for the connection pool:
Each metric is tagged with the following information by default:
To replace the default metric tags, define a Java
Kotlin
To disable the auto-configured connection pool metrics, set the following property: Properties
Yaml
Jetty MetricsAuto-configuration binds metrics for Jetty’s @Timed Annotation SupportYou can use the For example, the following code shows how you can use the annotation to instrument all request mappings in a
Java
Kotlin
If you want only to instrument a single mapping, you can use the annotation on the method instead of the class: Java
Kotlin
You can also combine class-level and method-level annotations if you want to change the timing details for a specific method: Java
Kotlin
Redis MetricsAuto-configuration registers a 13.6.4. Registering Custom MetricsTo register custom metrics, inject Java
Kotlin
If your metrics depend on other beans, we recommend that you use a Java
Kotlin
Using a
13.6.5. Customizing Individual MetricsIf you need to apply customizations to specific For example, if you want to rename the
Java
Kotlin
Common TagsCommon tags are generally used for dimensional drill-down on the operating environment, such as host, instance, region, stack, and others. Commons tags are applied to all meters and can be configured, as the following example shows: Properties
Yaml
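In properties form:

```properties
management.metrics.tags.region=us-east-1
management.metrics.tags.stack=prod
```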
The preceding example adds region and stack tags to all meters, with a value of us-east-1 and prod, respectively.
Per-meter PropertiesIn addition to Properties
Yaml
The following properties allow per-meter customization: Table 9. Per-meter customizations
For more details on the concepts behind 13.6.6. Metrics EndpointSpring Boot
provides a Navigating to
You can also add any number of
13.7. AuditingOnce Spring Security is in play, Spring Boot Actuator has a flexible audit framework that publishes events (by default, “authentication success”, “failure” and “access denied” exceptions). This feature can be very useful for reporting and for implementing a lock-out policy based on authentication failures. You can enable auditing by providing a bean of type 13.7.1. Custom AuditingTo customize published security events, you can provide your own implementations of You can also use the audit services for your own business events. To do so, either inject the 13.8. HTTP TracingYou can enable HTTP Tracing by providing a bean of type You can use the 13.8.1. Custom HTTP tracingTo customize the items that are included in each trace, use the 13.9. Process MonitoringIn the
By default, these writers are not activated, but you can enable them:
13.9.1. Extending ConfigurationIn the org.springframework.context.ApplicationListener=\ org.springframework.boot.context.ApplicationPidFileWriter,\ org.springframework.boot.web.context.WebServerPortFileWriter 13.9.2. Programmatically Enabling Process MonitoringYou can also activate a listener by invoking the 13.10. Cloud Foundry SupportSpring Boot’s actuator module includes additional support that is activated when you deploy to a compatible Cloud Foundry instance. The The extended support lets Cloud Foundry management UIs (such as the web application that you can use to view deployed applications) be augmented with Spring Boot actuator information. For example, an application status page can include full health information instead of the typical “running” or “stopped” status.
13.10.1. Disabling Extended Cloud Foundry Actuator SupportIf you want to fully disable the Properties
Yaml
13.10.2. Cloud Foundry Self-signed CertificatesBy default, the security verification for Properties
Yaml
13.10.3. Custom Context PathIf the server’s context-path has been configured to anything other than If you expect the Cloud Foundry endpoints to always be available at Java
Kotlin
13.11. What to Read NextYou might want to read about graphing tools such as Graphite. 14. Deploying Spring Boot ApplicationsSpring Boot’s flexible packaging options provide a great deal of choice when it comes to deploying your application. You can deploy Spring Boot applications to a variety of cloud platforms, to virtual/real machines, or make them fully executable for Unix systems. This section covers some of the more common deployment scenarios. 14.1. Deploying to the CloudSpring Boot’s executable jars are ready-made for most popular cloud PaaS (Platform-as-a-Service) providers. These providers tend to require that you “bring your own container”. They manage application processes (not Java applications specifically), so they need an intermediary layer that adapts your application to the cloud’s notion of a running process. Two popular cloud providers, Heroku and Cloud Foundry, employ a “buildpack” approach. The buildpack wraps your deployed code
in whatever is needed to start your application. It might be a JDK and a call to Ideally, your application, like a Spring Boot executable jar, has everything that it needs to run packaged within it. In this section, we look at what it takes to get the application that we developed in the “Getting Started” section up and running in the Cloud. 14.1.1. Cloud FoundryCloud Foundry provides default buildpacks that come into play if no other buildpack is specified. The Cloud Foundry Java buildpack has excellent support for Spring applications, including Spring Boot. You can deploy stand-alone executable jar applications as well as
traditional war packaged applications. Once you have built your application (by using, for example, mvn clean package) and have installed the cf command line tool, deploy your application by using the cf push command, substituting the path to your compiled jar. Be sure to have logged in with your cf command line client before pushing an application.
See the cf push documentation for more options. At this point, cf starts uploading your application, producing output similar to the following example: Uploading acloudyspringtime... OK Preparing to start acloudyspringtime... OK -----> Downloaded app package (8.9M) -----> Java Buildpack Version: v3.12 (offline) | https://github.com/cloudfoundry/java-buildpack.git#6f25b7e -----> Downloading Open Jdk JRE 1.8.0_121 from https://java-buildpack.cloudfoundry.org/openjdk/trusty/x86_64/openjdk-1.8.0_121.tar.gz (found in cache) Expanding Open Jdk JRE to .java-buildpack/open_jdk_jre (1.6s) -----> Downloading Open JDK Like Memory Calculator 2.0.2_RELEASE from https://java-buildpack.cloudfoundry.org/memory-calculator/trusty/x86_64/memory-calculator-2.0.2_RELEASE.tar.gz (found in cache) Memory Settings: -Xss349K -Xmx681574K -XX:MaxMetaspaceSize=104857K -Xms681574K -XX:MetaspaceSize=104857K -----> Downloading Container Certificate Trust Store 1.0.0_RELEASE from https://java-buildpack.cloudfoundry.org/container-certificate-trust-store/container-certificate-trust-store-1.0.0_RELEASE.jar (found in cache) Adding certificates to .java-buildpack/container_certificate_trust_store/truststore.jks (0.6s) -----> Downloading Spring Auto Reconfiguration 1.10.0_RELEASE from https://java-buildpack.cloudfoundry.org/auto-reconfiguration/auto-reconfiguration-1.10.0_RELEASE.jar (found in cache) Checking status of app 'acloudyspringtime'... 0 of 1 instances running (1 starting) ... 0 of 1 instances running (1 starting) ... 0 of 1 instances running (1 starting) ... 1 of 1 instances running (1 running) App started Congratulations! The application is now live! Once your application is live, you can verify the status of the deployed application by using the cf apps command.
Once Cloud Foundry acknowledges that your application has been deployed, you should be able to find the application at the URI given. In the preceding example, you could find it at https://acloudyspringtime.cfapps.io/. Binding to ServicesBy default, metadata about the running application as well as service connection information is exposed to the application as environment variables (for example: $VCAP_SERVICES). Environment variables do not always make for the easiest API, so Spring Boot automatically extracts them and flattens the data into properties that can be accessed through Spring’s Environment abstraction, as shown in the following example: Java
Kotlin
All Cloud Foundry properties are prefixed with vcap. You can use vcap properties to access application information (such as the public URL of the application) and service information (such as database credentials).
14.1.2. KubernetesSpring Boot auto-detects Kubernetes deployment environments by checking the environment for "*_SERVICE_HOST" and "*_SERVICE_PORT" variables. You can override this detection with the spring.main.cloud-platform configuration property. Kubernetes Container LifecycleWhen Kubernetes deletes an application instance, the shutdown process involves several subsystems concurrently: shutdown hooks, unregistering the service, removing the instance from the load-balancer… Because this shutdown processing happens in parallel (and due to the nature of distributed systems), there is a window during which traffic can be routed to a pod that has also begun its shutdown processing. You can configure a sleep execution in a preStop handler to avoid requests being routed to a pod that has already begun shutting down. This sleep should be long enough for new requests to stop being routed to the pod, and its duration will vary from deployment to deployment. The preStop handler can be configured by using the PodSpec in the pod’s configuration file as follows:
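A minimal PodSpec sketch (the container name, image, and sleep duration are illustrative and should be adapted to your deployment):

```yaml
spec:
  containers:
  - name: "example-container"
    image: "example-image"
    lifecycle:
      preStop:
        exec:
          command: ["sh", "-c", "sleep 10"]
```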
Once the pre-stop hook has completed, SIGTERM will be sent to the container and graceful shutdown will begin, allowing any remaining in-flight requests to complete.
14.1.3. HerokuHeroku is another popular PaaS platform. To customize Heroku builds, you provide a Procfile, which provides the incantation required to deploy an application. Heroku assigns a port for the Java application to use and then ensures that routing to the external URI works. You
must configure your application to listen on the correct port. The following example shows the Procfile for our starter REST application: web: java -Dserver.port=$PORT -jar target/demo-0.0.1-SNAPSHOT.jar Spring Boot makes -D arguments available as properties accessible from a Spring Environment instance. The server.port configuration property is fed to the embedded Tomcat, Jetty, or Undertow instance, which then uses the port when it starts up. The $PORT environment variable is assigned to us by the Heroku PaaS. This should be
everything you need. The most common deployment workflow for Heroku deployments is to git push the code to production, which results in output similar to the following: Initializing repository, done. Counting objects: 95, done. Delta compression using up to 8 threads. Compressing objects: 100% (78/78), done. Writing objects: 100% (95/95), 8.66 MiB | 606.00 KiB/s, done. Total 95 (delta 31), reused 0 (delta 0) -----> Java app detected -----> Installing OpenJDK 1.8... done -----> Installing Maven 3.3.1... done -----> Installing settings.xml... done -----> Executing: mvn -B -DskipTests=true clean install [INFO] Scanning for projects... Downloading: https://repo.spring.io/... Downloaded: https://repo.spring.io/... (818 B at 1.8 KB/sec) .... Downloaded: https://s3pository.heroku.com/jvm/... (152 KB at 595.3 KB/sec) [INFO] Installing /tmp/build_0c35a5d2-a067-4abc-a232-14b1fb7a8229/target/... [INFO] Installing /tmp/build_0c35a5d2-a067-4abc-a232-14b1fb7a8229/pom.xml ... [INFO] ------------------------------------------------------------------------ [INFO] BUILD SUCCESS [INFO] ------------------------------------------------------------------------ [INFO] Total time: 59.358s [INFO] Finished at: Fri Mar 07 07:28:25 UTC 2014 [INFO] Final Memory: 20M/493M [INFO] ------------------------------------------------------------------------ -----> Discovering process types Procfile declares types -> web -----> Compressing... done, 70.4MB -----> Launching... done, v6 https://agile-sierra-1405.herokuapp.com/ deployed to Heroku To [email protected]:agile-sierra-1405.git * [new branch] main -> main 14.1.5. Amazon Web Services (AWS)Amazon Web Services offers multiple ways to install Spring Boot-based applications, either as traditional web applications (war) or as executable jar files with an embedded web server. The options include:
Each has different features and pricing models. In this document, we describe the approach using AWS Elastic Beanstalk. AWS Elastic BeanstalkAs described in the official Elastic Beanstalk Java guide, there are two main options to deploy a Java application. You can either use the “Tomcat Platform” or the “Java SE platform”. Using the Tomcat Platform This option applies to Spring Boot projects that produce a war file. No special configuration is required. You need only follow the official guide. Using the Java SE Platform This option applies to Spring Boot projects that produce a jar file and run an embedded web container. Elastic Beanstalk environments run an nginx instance on port 80 to proxy the actual application, running on port 5000. To configure it, add the following line to your src/main/resources/application.properties file:
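The line to add is the one that moves the application onto the port Elastic Beanstalk’s nginx proxy expects:

```properties
server.port=5000
```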
SummaryThis is one of the easiest ways to get to AWS, but there are more things to cover, such as how to integrate Elastic Beanstalk into any CI / CD tool, use the Elastic Beanstalk Maven plugin instead of the CLI, and others. There is a blog post covering these topics more in detail. 14.1.6. CloudCaptain and Amazon Web ServicesCloudCaptain works by turning your Spring Boot executable jar or war into a minimal VM image that can be deployed unchanged either on VirtualBox or on AWS. CloudCaptain comes with deep integration for Spring Boot and uses the information from your Spring Boot configuration file to automatically configure ports and health check URLs. CloudCaptain leverages this information both for the images it produces as well as for all the resources it provisions (instances, security groups, elastic load balancers, and so on). Once you have created
a CloudCaptain account, connected it to your AWS account, installed the latest version of the CloudCaptain Client, and ensured that the application has been built by Maven or Gradle (by using, for example, mvn clean package), you can deploy it to AWS with a single command.
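For example (the jar name and environment flag are placeholders to adapt):

```shell
$ boxfuse run myapp-1.0.jar -env=prod
```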
At this point, CloudCaptain creates an image for your application, uploads it, and configures and starts the necessary resources on AWS, resulting in output similar to the following example: Fusing Image for myapp-1.0.jar ... Image fused in 00:06.838s (53937 K) -> axelfontaine/myapp:1.0 Creating axelfontaine/myapp ... Pushing axelfontaine/myapp:1.0 ... Verifying axelfontaine/myapp:1.0 ... Creating Elastic IP ... Mapping myapp-axelfontaine.boxfuse.io to 52.28.233.167 ... Waiting for AWS to create an AMI for axelfontaine/myapp:1.0 in eu-central-1 (this may take up to 50 seconds) ... AMI created in 00:23.557s -> ami-d23f38cf Creating security group boxfuse-sg_axelfontaine/myapp:1.0 ... Launching t2.micro instance of axelfontaine/myapp:1.0 (ami-d23f38cf) in eu-central-1 ... Instance launched in 00:30.306s -> i-92ef9f53 Waiting for AWS to boot Instance i-92ef9f53 and Payload to start at https://52.28.235.61/ ... Payload started in 00:29.266s -> https://52.28.235.61/ Remapping Elastic IP 52.28.233.167 to i-92ef9f53 ... Waiting 15s for AWS to complete Elastic IP Zero Downtime transition ... Deployment completed successfully. axelfontaine/myapp:1.0 is up and running at https://myapp-axelfontaine.boxfuse.io/ Your application should now be up and running on AWS. 14.1.7. Azure14.1.8. Google CloudGoogle Cloud has several options that can be used to launch Spring Boot applications. The easiest to get started with is probably App Engine, but you could also find ways to run Spring Boot in a container with Container Engine or on a virtual machine with Compute Engine. To run in App Engine, you can create a project in the UI first, which sets up a unique identifier for you and also sets up HTTP routes. Add a Java app to the project and leave it empty and then use the Google Cloud SDK to push your Spring Boot app into that slot from the command line or CI build. App Engine Standard requires you to use WAR packaging. 
Follow these steps to deploy an App Engine Standard application to Google Cloud. Alternatively, App Engine Flex requires you to create an app.yaml file to describe the resources your app requires.
You can deploy the app (for example, with a Maven plugin) by adding the project ID to the build configuration, as shown in the following example:
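A sketch of such a Maven configuration using Google’s App Engine plugin (the plugin version and the myproject ID are placeholders; check the plugin’s releases for a current version):

```xml
<plugin>
    <groupId>com.google.cloud.tools</groupId>
    <artifactId>appengine-maven-plugin</artifactId>
    <version>2.4.4</version>
    <configuration>
        <project>myproject</project>
    </configuration>
</plugin>
```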
Then deploy with mvn appengine:deploy (if you need to authenticate first, the build fails). 14.2. Installing Spring Boot ApplicationsIn addition to running Spring Boot applications by using java -jar, it is also possible to make fully executable applications for Unix systems. A fully executable jar can be executed like any other executable binary or it can be registered with init.d or systemd. This helps when installing and managing Spring Boot applications in common production environments.
To create a ‘fully executable’ jar with Maven, use the following plugin configuration:
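The Maven plugin configuration that enables the embedded launch script is:

```xml
<plugin>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-maven-plugin</artifactId>
    <configuration>
        <executable>true</executable>
    </configuration>
</plugin>
```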
The following example shows the equivalent Gradle configuration:
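In Gradle, the same effect is achieved by asking the bootJar task to prepend the launch script:

```groovy
bootJar {
    launchScript()
}
```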
You can then run your application by typing ./my-application.jar (where my-application is the name of your artifact). The directory containing the jar is used as your application’s working directory. 14.2.1. Supported Operating SystemsThe default script supports most Linux distributions and is tested on CentOS and Ubuntu. Other platforms, such as OS X and FreeBSD, require the use of a custom embedded launch script. 14.2.2. Unix/Linux ServicesSpring Boot applications can easily be started as Unix/Linux services by using either init.d or systemd. Installation as an init.d Service (System V)If you configured Spring Boot’s Maven or Gradle plugin to generate a fully executable jar, and you do not use a custom embedded launch script, your application can be used as an init.d service. To do so, symlink the jar to init.d to support the standard start, stop, restart, and status commands. The script supports the following features:
Assuming that you have a Spring Boot application installed in /var/myapp, to install it as an init.d service, create a symlink from the jar into /etc/init.d.
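For example (assuming the jar is /var/myapp/myapp.jar):

```shell
$ sudo ln -s /var/myapp/myapp.jar /etc/init.d/myapp
```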
Once installed, you can start and stop the service in the usual way. For example, on a Debian-based system, you could start it with the following command:
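The start command looks like this (myapp being the name of the symlink created above):

```shell
$ service myapp start
```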
You can also flag the application to start automatically by using your standard operating system tools. For example, on Debian, you could use the following command:
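On Debian, the command is:

```shell
$ update-rc.d myapp defaults <priority>
```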
Securing an init.d Service
When executed as root, as is the case when root is being used to start an init.d service, the default executable script runs the application as the user specified in the RUN_AS_USER environment variable. When the environment variable is not set, the user who owns the jar file is used instead. You should never run a Spring Boot application as root, so RUN_AS_USER should never be root and your application’s jar file should never be owned by root. Instead, create a specific user to run your application and set the RUN_AS_USER environment variable or use chown to make that user the owner of the jar file.
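For example, assuming a dedicated bootapp user (the user name and jar name are illustrative):

```shell
$ chown bootapp:bootapp your-app.jar
```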
In this case, the default executable script runs the application as the
You should also take steps to prevent the modification of your application’s jar file. First, configure its permissions so that it cannot be written and can only be read or executed by its owner (for example, chmod 500 your-app.jar). Second, take steps to limit the damage if your application or the account that runs it is compromised. If an attacker does gain access, they could make the jar file writable and change its contents. One way to
protect against this is to make it immutable by using chattr.
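For example:

```shell
$ sudo chattr +i your-app.jar
```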
This will prevent any user, including root, from modifying the jar. If root is used to control the application’s service and you use a .conf file to customize its startup, the .conf file is read and evaluated by the root user. It should be secured accordingly: use chmod so that the file can only be read by the owner, and use chown to make root the owner.
Installation as a systemd Service
Assuming that you have a Spring Boot application installed in /var/myapp, to install it as a systemd service, create a script named myapp.service and place it in the /etc/systemd/system directory. The following script offers an example: [Unit] Description=myapp After=syslog.target [Service] User=myapp ExecStart=/var/myapp/myapp.jar SuccessExitStatus=143 [Install] WantedBy=multi-user.target
Note that, unlike when running as an init.d service, the user that runs the application, the PID file, and the console log file are managed by systemd itself and must therefore be configured by using appropriate fields in the ‘service’ script. Consult the service unit configuration man page for more details. To flag the application to start automatically on system boot, use the following command:
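The enable command (using the unit name from the example above):

```shell
$ systemctl enable myapp.service
```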
Run Customizing the Startup ScriptThe default embedded startup script written by the Maven or Gradle plugin can be customized in a number of ways. For most people,
using the default script along with a few customizations is enough. If you find you cannot customize something that you need to, use the embeddedLaunchScript option to write your own file entirely. Customizing the Start Script When It Is Written It often makes sense to customize elements of the start script as it is written into the jar file. For example, init.d scripts can provide a “description”. Since you know the description up front (and it need not change), you may as well provide it when the jar is generated. The following property substitutions are supported with the default script:
Customizing a Script When It Runs For items of the script that need to be customized after the jar has been written, you can use environment variables or a config file. The following environment properties are supported with the default script:
With the exception of JARFILE and APP_NAME, the settings listed in the previous section can be configured by using a .conf file. The file is expected to be next to the jar file and to have the same name but suffixed with .conf rather than .jar. The following example shows a typical file: myapp.conf JAVA_OPTS=-Xmx1024M LOG_FOLDER=/custom/log/folder
14.2.3. Microsoft Windows ServicesA Spring Boot application can be started as a Windows service by using winsw. A separately maintained sample describes step-by-step how you can create a Windows service for your Spring Boot application.
You can type
The
15.2.1. Running Applications With the CLIYou can compile and run Groovy source code by using the
The following example shows a “hello world” web application written in Groovy: hello.groovy
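The “hello world” application from the CLI getting-started material looks like this (note that no imports are needed; the CLI adds them, along with the surrounding Spring setup, automatically):

```groovy
@RestController
class WebApplication {

    @RequestMapping("/")
    String home() {
        "Hello World!"
    }

}
```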
To compile and run the application, type the following command:
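The command is:

```shell
$ spring run hello.groovy
```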
To pass command-line arguments to the application, use -- to separate the application arguments from the spring command arguments.
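For example, the following sets the server port to 9000:

```shell
$ spring run hello.groovy -- --server.port=9000
```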
To set JVM command line arguments, you can use the JAVA_OPTS environment variable.
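For example:

```shell
$ JAVA_OPTS=-Xmx1024m spring run hello.groovy
```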
Deduced “grab” DependenciesStandard Groovy includes a Spring Boot extends this technique further and tries to deduce which libraries to “grab” based on your code. For example, since the The following items are used as “grab hints”:
Deduced “grab” CoordinatesSpring Boot extends Groovy’s standard
Default Import StatementsTo help reduce the size of your Groovy code, several
Automatic Main MethodUnlike the equivalent Java application, you do not need to include a Custom Dependency ManagementBy default, the CLI uses the dependency management declared in For example, consider the following declaration:
The preceding declaration picks up When you specify multiple BOMs, they are applied in the order in which you declare them, as shown in the following example:
The preceding example indicates that the dependency management in You can use 15.2.2. Applications With Multiple Source FilesYou can use “shell globbing” with all commands that accept file input. Doing so lets you use multiple files from a single directory, as shown in the following example: 15.2.3. Packaging Your ApplicationYou can use the
The resulting jar contains the classes produced by compiling the application and all of the application’s dependencies so that it can then be run by using java -jar. The default includes are as follows: public/**, resources/**, static/**, templates/**, META-INF/**, *. The default excludes are as follows: .*, repository/**, build/**, target/**, **/*.jar, **/*.groovy. Type spring help jar on the command line for more information. 15.2.4. Initialize a New ProjectThe init command lets you create a new project by using start.spring.io without leaving the shell, as shown in the following example:
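An init invocation looks like this (the dependency list and project name are illustrative):

```shell
$ spring init --dependencies=web,data-jpa my-project
```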
The preceding example creates a my-project directory with a Maven-based project that uses spring-boot-starter-web and spring-boot-starter-data-jpa.
The init command supports many options. See the help output for more details.
15.2.5. Using the Embedded ShellSpring Boot includes command-line completion scripts for the BASH and zsh shells. If you do not use either of these shells (perhaps you are a Windows user), you can
use the shell command to launch an integrated shell, as shown in the following example:
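The command is:

```shell
$ spring shell
```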
From inside the embedded shell, you can run other commands directly:
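For example, running version from inside the shell prints the CLI version (the version shown assumes a 2.7.5 installation):

```shell
$ version
Spring CLI v2.7.5
```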
The embedded shell supports ANSI color output as well as 15.2.6. Adding Extensions to the CLIYou can add extensions to the CLI by using the
In addition to installing the artifacts identified by the coordinates you supply, all of the artifacts' dependencies are also installed. To uninstall a dependency, use the
It uninstalls the artifacts identified by the coordinates you supply and their dependencies. To uninstall all additional dependencies, you can use the 15.3. Developing Applications With the Groovy Beans DSLSpring Framework 4.0 has native support for a
You can mix class declarations with 15.4. Configuring the CLI With settings.xmlThe Spring Boot CLI uses Maven Resolver, Maven’s dependency
resolution engine, to resolve dependencies. The CLI makes use of the Maven configuration found in
15.5. What to Read NextThere are some sample groovy scripts available from the GitHub repository that you can use to try out the Spring Boot CLI. There is also extensive Javadoc throughout the source code. If you find that you reach the limit of the CLI tool, you probably want to look at converting your application to a full Gradle or Maven built “Groovy project”. The next section covers Spring Boot’s "Build tool plugins", which you can use with Gradle or Maven. Spring Boot provides build tool plugins for Maven and Gradle. The plugins offer a variety of features, including the packaging of executable jars. This section provides more details on both plugins as well as some help should you need to extend an unsupported build system. If you are just getting started, you might want to read “Build Systems” from the “Developing with Spring Boot” section first. 16.1. Spring Boot Maven PluginThe Spring Boot Maven Plugin provides Spring Boot support in Maven, letting you package executable jar or war archives and run an application “in-place”. To use it, you must use Maven 3.2 (or later). See the plugin’s documentation to learn more:
16.2. Spring Boot Gradle PluginThe Spring Boot Gradle Plugin provides Spring Boot support in Gradle, letting you package executable jar or war archives, run Spring Boot applications, and use the dependency management provided by
16.3. Spring Boot AntLib ModuleThe Spring Boot AntLib module provides basic Spring Boot support for Apache Ant. You can use the module to create executable jars. To use the module, you need to declare an additional
You need to remember to start Ant using the
16.3.1. Spring Boot Ant TasksOnce the
Using the “exejar” TaskYou can use the
The following nested elements can be used with the task:
ExamplesThis section shows two examples of Ant tasks. Specify start-class
Detect start-class
16.3.2. Using the “findmainclass” TaskThe
ExamplesThis section contains three examples of using Find and log
Find and set
Override and set
16.4. Supporting Other Build SystemsIf you want to use a build tool other than Maven, Gradle, or Ant, you likely need to develop your own plugin. Executable jars need to follow a specific format and certain entries need to be written in an uncompressed form (see the “executable jar format” section in the appendix for details). The Spring Boot Maven and Gradle plugins both make use of 16.4.1. Repackaging ArchivesTo repackage an existing archive so that it becomes a self-contained executable archive, use 16.4.2. Nested LibrariesWhen repackaging an archive, you can include references to dependency files by using the If your archive already includes libraries, you can use 16.4.3. Finding a Main ClassIf you do not use 16.4.4. Example Repackage ImplementationThe following example shows a typical repackage implementation: Java
Kotlin
16.5. What to Read NextIf you are interested in how the build tool plugins work, you can look at the
If you have specific build-related questions, see the “how-to” guides. 17. “How-to” GuidesThis section provides answers to some common ‘how do I do that…’ questions that often arise when using Spring Boot. Its coverage is not exhaustive, but it does cover quite a lot. If you have a
specific problem that we do not cover here, you might want to check stackoverflow.com to see if someone has already provided an answer. This is also a great place to ask new questions (please use the spring-boot tag). We are also more than happy to extend this section. If you want to add a ‘how-to’, send us a pull request.
17.1. Spring Boot ApplicationThis section includes topics relating directly to Spring Boot applications. 17.1.1. Create Your Own FailureAnalyzer
17.1.2. Troubleshoot Auto-configurationThe Spring Boot auto-configuration tries its best to “do the right thing”, but sometimes things fail, and it can be hard to tell why. There is a really useful Many more questions can be answered by looking at the source code and the Javadoc. When reading the code, remember the following rules of thumb:
17.1.3. Customize the Environment or ApplicationContext Before It StartsA
The It is also possible to customize the org.springframework.boot.env.EnvironmentPostProcessor=com.example.YourEnvironmentPostProcessor The
implementation can load arbitrary files and add them to the Java
Kotlin
17.1.4. Build an ApplicationContext Hierarchy (Adding a Parent or Root Context)You can use the 17.1.5. Create a Non-web ApplicationNot all Spring applications
have to be web applications (or web services). If you want to execute some code in a 17.2. Properties and ConfigurationThis section includes topics about setting and reading properties and configuration settings and their interaction with Spring Boot applications. 17.2.1. Automatically Expand Properties at Build TimeRather than hardcoding some properties that are also specified in your project’s build configuration, you can automatically expand them by instead using the existing build configuration. This is possible in both Maven and Gradle. Automatic Property Expansion Using MavenYou can automatically expand properties from the Maven project by using resource filtering. If you use the
If you do not use the starter parent, you need to include the following element inside the
You also need to include the following element inside
Automatic Property Expansion Using GradleYou can automatically expand properties from the Gradle project by configuring the Java plugin’s
You can then refer to your Gradle project’s properties by using placeholders, as shown in the following example: Properties
Yaml
17.2.2. Externalize the Configuration of SpringApplicationA Properties
Yaml
Then the Spring Boot banner is not printed on startup, and the application is not starting an embedded web server. Properties defined in external configuration override and replace the values specified with the Java API, with the notable exception of the primary sources. Primary sources
are those provided to the Java
Kotlin
Or to Java
Kotlin
Given the examples above, if we have the following configuration: Properties
Yaml
The actual application will show the banner (as overridden by configuration) and
uses three sources for the
17.2.3. Change the Location of External Properties of an ApplicationBy default, properties from different sources are added to the Spring You can also provide the following System properties (or environment variables) to change the behavior:
No matter what you set in the environment, Spring Boot always loads application.properties as described above.
17.2.4. Use ‘Short’ Command Line ArgumentsSome people like to use (for example) --port=9000 instead of --server.port=9000 to set configuration properties on the command line. You can enable this behavior by using placeholders in application.properties. Yaml
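One way to do this is to map a short port argument onto server.port with 8080 as the fallback:

```properties
server.port=${port:8080}
```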
17.2.5. Use YAML for External PropertiesYAML is a superset of JSON and, as such, is a convenient syntax for storing external properties in a hierarchical format, as shown in the following example:
Create a file called The preceding example YAML corresponds to the following
See “Working With YAML” in the ‘Spring Boot features’ section for more information about YAML. 17.2.6. Set the Active Spring ProfilesThe Spring Environment has an API for this, but you would normally set a System property (spring.profiles.active) or an OS environment variable (SPRING_PROFILES_ACTIVE). Also, you can launch your application with a -D argument (remember to put it before the main class or jar archive), as follows: $ java -jar -Dspring.profiles.active=production demo-0.0.1-SNAPSHOT.jar
In Spring Boot, you can also set the active profile in application.properties, as shown in the following example: Properties
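The properties form (the production profile name is illustrative):

```properties
spring.profiles.active=production
```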
Yaml
A value set this way is replaced by the System property or environment variable setting but not by the See “Profiles” in the “Spring Boot features” section for more information. 17.2.7. Set the Default Profile NameThe default profile is a profile that is enabled if no profile is active. By default, the name of the default profile is In Spring Boot, you can also set the default profile name in Properties
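The corresponding property (the dev profile name is illustrative):

```properties
spring.profiles.default=dev
```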
Yaml
See “Profiles” in the “Spring Boot features” section for more information. 17.2.8. Change Configuration Depending on the EnvironmentSpring Boot supports multi-document YAML and Properties files (see Working With Multi-Document Files for details) which can be activated conditionally based on the active profiles. If a document contains a Properties
Yaml
In the preceding example, the default port is 9000. However, if the Spring profile called ‘development’ is active, then the port is 9001. If ‘production’ is active, then the port is 0.
17.2.9. Discover Built-in Options for External PropertiesSpring Boot binds external properties from A running application with the Actuator features has a The appendix includes an
17.3. Embedded Web ServersEach Spring Boot web application includes an embedded web server. This feature leads to a number of how-to questions, including how to change the embedded server and how to configure the embedded server. This section answers those questions. 17.3.1. Use Another Web ServerMany Spring Boot starters include default embedded containers.
When switching to a different HTTP server, you need to swap the default dependencies for those that you need instead. To help with this process, Spring Boot provides a separate starter for each of the supported HTTP servers. The following Maven example shows how to exclude Tomcat and include Jetty for Spring MVC:
If you wish to use Jetty 10, which does support servlet 4.0, you can do so as shown in the following example:
Note that along with excluding the Tomcat starter, a couple of Jetty 9-specific dependencies also need to be excluded. The following Gradle example configures the necessary dependencies and a module replacement to use Undertow in place of Reactor Netty for Spring WebFlux:
17.3.2. Disabling the Web ServerIf your classpath contains the necessary bits to start a web server, Spring Boot will automatically start it. To disable this behavior configure the Properties
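The property that disables the web server is:

```properties
spring.main.web-application-type=none
```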
Yaml
17.3.3. Change the HTTP PortIn a standalone application, the main HTTP port defaults to 8080 but can be set with server.port (for example, in application.properties or as a System property). Thanks to relaxed binding of Environment values, you can also use SERVER_PORT (for example, as an OS environment variable). To switch off the HTTP endpoints completely but still create a WebApplicationContext, use server.port=-1 (doing so is sometimes useful for testing). 17.3.4. Use a Random Unassigned HTTP PortTo scan for a free port (using OS natives to prevent clashes) use server.port=0. 17.3.5. Discover the HTTP Port at RuntimeYou can access the port the server is running on from log output or from the WebServerApplicationContext through its WebServer. The best way to get that and be sure it has been initialized is to add a @Bean of type ApplicationListener<WebServerInitializedEvent> and pull the container out of the event when it is published. Tests that
use @SpringBootTest(webEnvironment = WebEnvironment.RANDOM_PORT) can also inject the actual port into a field by using the @LocalServerPort annotation, as shown in the following example: Java
Kotlin
17.3.6. Enable HTTP Response CompressionHTTP response compression is supported by Jetty, Tomcat, Reactor Netty, and Undertow. It can be enabled in Properties
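The property that enables compression is:

```properties
server.compression.enabled=true
```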
Yaml
By default, responses must be at least 2048 bytes in length for compression to be performed. You can configure this behavior by setting the server.compression.min-response-size property. By default, responses are compressed only if their content type is one of the following:
You can configure this
behavior by setting the server.compression.mime-types property. 17.3.7. Configure SSLSSL can be configured declaratively by setting the various server.ssl.* properties, typically in application.properties or application.yml. The following example shows setting SSL properties using a Java KeyStore file: Properties
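A typical Java KeyStore configuration (the store name and passwords are placeholders):

```properties
server.port=8443
server.ssl.key-store=classpath:keystore.jks
server.ssl.key-store-password=secret
server.ssl.key-password=another-secret
```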
Yaml
The following example shows setting SSL properties using PEM-encoded certificate and private key files: Properties
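A sketch of the PEM-based configuration (the file names are placeholders):

```properties
server.ssl.certificate=classpath:my-cert.crt
server.ssl.certificate-private-key=classpath:my-cert.key
server.ssl.trust-certificate=classpath:ca-cert.crt
```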
Yaml
See Using
configuration such as the preceding example means the application no longer supports a plain HTTP connector at port 8080. Spring Boot does not support the configuration of both an HTTP connector and an HTTPS connector through application.properties. If you want to have both, you need to configure one of them programmatically. We recommend using application.properties to configure HTTPS, as the HTTP connector is the easier of the two to configure programmatically. 17.3.8. Configure HTTP/2You can enable HTTP/2 support in your Spring Boot application with the server.http2.enabled configuration property. This support depends on the chosen web server and the application environment, since that protocol is not supported out-of-the-box by all JDK 8 releases. HTTP/2 With TomcatSpring Boot ships by default with Tomcat 9.0.x which supports HTTP/2 out of the box when using JDK 9 or later. Alternatively, HTTP/2 can be used on JDK 8 if the libtcnative library and its dependencies are installed on the host operating system. The library directory must be made available, if not
already, to the JVM library path. You can do so with a JVM argument such as Starting Tomcat 9.0.x on JDK 8 with HTTP/2 and SSL enabled but without that native support logs the following error: ERROR 8787 --- [ main] o.a.coyote.http11.Http11NioProtocol : The upgrade handler [org.apache.coyote.http2.Http2Protocol] for [h2] only supports upgrade via ALPN but has been configured for the ["https-jsse-nio-8443"] connector that does not support ALPN. This error is not fatal, and the application still starts with HTTP/1.1 SSL support. HTTP/2 With JettyFor HTTP/2 support, Jetty requires the additional
HTTP/2 With Reactor NettyThe Spring Boot manages the version for the HTTP/2 With UndertowAs of Undertow 1.4.0+, both 17.3.9. Configure the Web ServerGenerally, you should first consider using one of the many available configuration keys and customize your web server by adding new entries in your The previous sections covered already many common use cases, such as compression, SSL or HTTP/2. However, if a configuration key does not exist for your use case, you should then look at
The example below is for Tomcat with the Java
Kotlin
Once you have got access to a In addition Spring Boot provides:
As a last resort, you can also declare your own 17.3.10. Add a Servlet, Filter, or Listener to an ApplicationIn a servlet stack application, that is with the
Add a Servlet, Filter, or Listener by Using a Spring BeanTo add a In the case of filters and servlets, you can also add mappings and init parameters by adding a
Disable Registration of a Servlet or Filter As described earlier, any
Java
Kotlin
Add Servlets, Filters, and Listeners by Using Classpath Scanning
17.3.11. Configure Access LoggingAccess logs can be configured for Tomcat, Undertow, and Jetty through their respective namespaces. For instance, the following settings log access on Tomcat with a custom pattern. Properties
Yaml
Access logging for Undertow can be configured in a similar fashion, as shown in the following example: Properties
Yaml
Note that, in addition to enabling access logging and configuring its pattern, recording request start times has also been enabled. This is required when including the response time ( Finally, access logging for Jetty can also be configured as follows: Properties
Yaml
By default, logs are redirected to 17.3.12. Running Behind a Front-end Proxy ServerIf your application is running behind a proxy, a load-balancer or in the cloud, the request information (like the host, port, scheme…) might change along the way. Your application may be running on RFC7239 "Forwarded Headers" defines the If the proxy adds the commonly used If this is not enough, Spring Framework provides a ForwardedHeaderFilter. You can
register it as a servlet filter in your application by setting
Customize Tomcat’s Proxy ConfigurationIf you use Tomcat, you can additionally configure the names of the headers used to carry “forwarded” information, as shown in the following example: Properties
Yaml
Tomcat is also configured with a regular expression that matches internal proxies that are to be trusted. See the Properties
Yaml
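A sketch of configuring the trusted internal proxies expression (the doubled backslashes are required because the backslash is an escape character in a `.properties` file; the address range is illustrative):

```properties
server.tomcat.remoteip.internal-proxies=192\\.168\\.\\d{1,3}\\.\\d{1,3}
```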
You can take complete control of the configuration of Tomcat’s 17.3.13. Enable Multiple Connectors with TomcatYou can add an Java
Kotlin
17.3.14. Use Tomcat’s LegacyCookieProcessorBy default, the embedded Tomcat used by Spring Boot does not support "Version 0" of the Cookie format, so you may see the following error: java.lang.IllegalArgumentException: An invalid character [32] was present in the Cookie value If at all possible, you should consider updating your code to only store values compliant with later Cookie specifications. If, however, you cannot change the way that cookies are written, you can instead configure Tomcat to use a Java
Kotlin
17.3.15. Enable Tomcat’s MBean RegistryEmbedded Tomcat’s MBean registry is disabled by default. This minimizes Tomcat’s memory footprint. If you want to use Tomcat’s MBeans, for
example so that they can be used by Micrometer to expose metrics, you must use the server.tomcat.mbeanregistry.enabled property, as shown in the following example:
Yaml
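In properties form (this property name exists in Spring Boot's common application properties):

```properties
server.tomcat.mbeanregistry.enabled=true
```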
17.3.16. Enable Multiple Listeners with UndertowAdd an Java
Kotlin
17.3.17. Create WebSocket Endpoints Using @ServerEndpointIf you want to use Java
Kotlin
The bean shown in the preceding example registers any 17.4. Spring MVCSpring Boot has a number of starters that include Spring MVC. Note that some starters include a dependency on Spring MVC rather than include it directly. This section answers common questions about Spring MVC and Spring Boot. 17.4.1. Write a JSON REST ServiceAny Spring Java
Kotlin
As long as 17.4.2. Write an XML REST ServiceIf you have the Jackson XML extension (
If Jackson’s XML extension is not available and JAXB is available, XML can be rendered with the additional requirement of having Java
Kotlin
JAXB is only available out of the box with Java 8. If you use a more recent Java version, add the following dependency to your project:
17.4.3. Customize the Jackson ObjectMapperSpring MVC (client and server side) uses The
Spring Boot also has some features to make it easier to customize this behavior. You can configure the
For example, to enable pretty print, set This environment-based configuration is applied to the auto-configured The context’s Any beans of type If you want to replace the default If you provide any 17.4.4. Customize the @ResponseBody RenderingSpring
uses As in normal MVC usage, any 17.4.5. Handling Multipart File UploadsSpring Boot embraces the servlet 3 The multipart support is helpful when you want to receive
multipart encoded file data as a
17.4.6. Switch Off the Spring MVC DispatcherServletBy default, all content is served from the root of your application (/). If you would rather map to a different path, you can configure one as follows:
Yaml
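A sketch of the kind of properties example this refers to (the property name exists in Spring Boot's common application properties; the path value is illustrative):

```properties
spring.mvc.servlet.path=/mypath
```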
If you have additional servlets you can declare a Configuring the 17.4.7. Switch off the Default MVC ConfigurationThe easiest way to take complete control over MVC configuration is to provide your own 17.4.8. Customize ViewResolversA
For more detail, see the following sections:
17.5. Jersey17.5.1. Secure Jersey endpoints with Spring SecuritySpring Security can be used to secure a Jersey-based web application in much the same way as it can be used to secure a Spring MVC-based web application. However, if you want to use Spring Security’s method-level security with Jersey, you must configure Jersey to use The Java
Kotlin
17.5.2. Use Jersey Alongside Another Web FrameworkTo use Jersey alongside another web framework, such as Spring MVC, it should be configured so that it will allow the other framework to handle requests that it cannot handle. First, configure Jersey to use a filter rather than a servlet by configuring the Java
Kotlin
17.6. HTTP ClientsSpring Boot offers a number of starters that work with HTTP clients. This section answers questions related to using them. 17.6.1. Configure RestTemplate to Use a ProxyAs described in RestTemplate Customization, you
can use a The exact details of the proxy configuration depend on the underlying client request factory that is being used. 17.6.2. Configure the TcpClient used by a Reactor Netty-based WebClientWhen Reactor Netty is on the classpath a Reactor Netty-based Java
Kotlin
17.7. LoggingSpring Boot has no mandatory logging dependency, except for the Commons Logging API, which is typically provided by Spring Framework’s
Spring Boot has a If the only change you need to make to logging is to set the levels of various
loggers, you can do so in application.properties by using the "logging.level" prefix, as shown in the following example:
Yaml
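A sketch of the kind of logging-level settings this refers to (the logging.level prefix is standard; the logger names and levels are illustrative):

```properties
logging.level.root=warn
logging.level.org.springframework.web=debug
logging.level.org.hibernate=error
```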
You can also set the location of a file to which to write the log (in addition to the console) by using To configure the more fine-grained settings of a logging system, you need to use the native configuration format supported by the 17.7.1. Configure Logback for LoggingIf you need to apply customizations to logback beyond those that can be achieved with Spring Boot provides a number of logback configurations that be The following files are provided under
In addition, a legacy A typical custom
Your logback configuration file can also make use of System properties that the
Spring Boot also provides some nice ANSI color terminal output on a console (but not in a log file) by using a custom Logback converter. See the If Groovy is on the classpath, you should be able to configure Logback with
Configure Logback for File-only OutputIf you want to disable console logging and write output only to a file, you need a custom
You also need to add Properties
Yaml
17.7.2. Configure Log4j for LoggingSpring Boot supports Log4j 2 for logging
configuration if it is on the classpath. If you use the starters for assembling dependencies, you have to exclude Logback and then include Log4j 2 instead. If you do not use the starters, you need to provide (at least) spring-jcl in addition to Log4j 2. The recommended path is through the starters, even though it requires some jiggling. The following example shows how to set up the starters in Maven:
Gradle provides a few different ways to set up the starters. One way is to use a module replacement. To do so, declare a dependency on the Log4j 2 starter and tell Gradle that any occurrences of the default logging starter should be replaced by the Log4j 2 starter, as shown in the following example:
Use YAML or JSON to Configure Log4j 2In addition to its default XML configuration format, Log4j 2 also supports YAML and JSON configuration files. To configure Log4j 2 to use an alternative configuration file format, add the appropriate dependencies to the classpath and name your configuration files to match your chosen file format, as shown in the following example:
Use Composite Configuration to Configure Log4j 2Log4j 2 has support for combining multiple configuration files into a single composite configuration. To use this support in Spring Boot, configure 17.8. Data AccessSpring Boot includes a number of starters for working with data sources. This section answers questions related to doing so. 17.8.1. Configure a Custom DataSourceTo configure your own The following example shows how to define a data source in a bean: Java
Kotlin
The following example shows how to define a data source by setting properties: Properties
Yaml
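A sketch of the kind of properties example this refers to, assuming the data source bean is bound with @ConfigurationProperties("app.datasource") (the app.datasource prefix and all values are illustrative):

```properties
app.datasource.url=jdbc:h2:mem:mydb
app.datasource.username=sa
app.datasource.pool-size=30
```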
Assuming that Spring Boot also provides a utility builder class, called The following example shows how to create a data source by using a Java
Kotlin
To run an app with that The following example shows how to define a JDBC data source by setting properties: Properties
Yaml
However, there is a catch. Because the actual type of the connection pool is not exposed, no keys are generated in the metadata for your custom Properties
Yaml
You can fix that by forcing the connection pool to use and return a dedicated implementation rather than The
following example shows how to create a Java
Kotlin
You can even go further by leveraging what Java
Kotlin
This setup puts you in sync with what Spring Boot does for you by default, except that a dedicated connection pool is chosen
(in code) and its settings are exposed in the Properties
Yaml
17.8.2. Configure Two DataSourcesIf you need to configure multiple data sources, you can apply the same tricks that are described in the previous section. You must, however, mark one of the If you create your own Java
Kotlin
Both data sources are also bound for advanced customizations. For instance, you could configure them as follows: Properties
Yaml
You can apply the same concept to the secondary Java
Kotlin
The preceding example configures two data sources on custom namespaces with the
same logic as Spring Boot would use in auto-configuration. Note that each 17.8.3. Use Spring Data RepositoriesSpring Data can create implementations of For many applications, all you need is to put the right Spring Data dependencies on your classpath. There is a Spring Boot tries to guess the location of your 17.8.4. Separate @Entity Definitions from Spring ConfigurationSpring Boot tries to guess the location of
your Java
Kotlin
17.8.5. Configure JPA PropertiesSpring Data JPA already provides some vendor-independent configuration options (such as those for SQL logging), and Spring Boot exposes those options and a few more for Hibernate as external configuration properties. Some of them are automatically detected according to the context so you should not have to set them. The The dialect to use is detected by the JPA provider. If you prefer to set the dialect yourself, set the The most common options to set are shown in the following example: Properties
Yaml
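A sketch of the kind of properties example this refers to (the property names exist under the spring.jpa namespace; the naming-strategy class name is a hypothetical placeholder):

```properties
spring.jpa.hibernate.naming.physical-strategy=com.example.MyPhysicalNamingStrategy
spring.jpa.show-sql=true
```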
In addition, all properties in spring.jpa.properties.* are passed through as normal JPA properties (with the prefix stripped) when the local EntityManagerFactory is created.
17.8.6. Configure Hibernate Naming StrategyHibernate uses two different naming strategies to map names from the
object model to the corresponding database names. The fully qualified class name of the physical and the implicit strategy implementations can be configured by setting the By default, Spring Boot configures the physical naming strategy with Java
Kotlin
If you prefer to use Hibernate 5’s default instead, set the following property:
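In properties form, the setting this refers to points the physical naming strategy at Hibernate's own default implementation:

```properties
spring.jpa.hibernate.naming.physical-strategy=org.hibernate.boot.model.naming.PhysicalNamingStrategyStandardImpl
```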
Alternatively, you can configure the following bean: Java
Kotlin
17.8.7. Configure Hibernate Second-Level CachingHibernate second-level cache can be configured for a range of cache providers. Rather than configuring Hibernate to lookup the cache provider again, it is better to provide the one that is available in the context whenever possible. To do this with JCache, first make sure that Java
Kotlin
This customizer will configure Hibernate to use the same 17.8.8. Use Dependency Injection in Hibernate ComponentsBy default, Spring Boot registers a You can disable or tune this behavior
by registering a 17.8.9. Use a Custom EntityManagerFactoryTo take full control of the configuration of the 17.8.10. Using Multiple EntityManagerFactoriesIf you need to use JPA against multiple data sources, you likely need one Java
Kotlin
The example above creates an
You should provide a similar configuration for any additional data sources for which you need JPA access. To complete the picture, you need to configure a If you use Spring Data, you need to configure Java
Kotlin
Java
Kotlin
17.8.11. Use a Traditional persistence.xml FileSpring Boot will not search for or use a 17.8.12. Use Spring Data JPA and Mongo RepositoriesSpring Data JPA and Spring Data Mongo can both automatically create There are also flags ( The same obstacle and the same features exist for other auto-configured Spring Data repository types (Elasticsearch, Solr, and others). To work with them, change the names of the annotations and flags accordingly. 17.8.13. Customize Spring Data’s Web SupportSpring Data provides web support that simplifies the use of Spring Data repositories in a web application. Spring Boot provides properties in the 17.8.14. Expose Spring Data Repositories as REST EndpointSpring Data REST can expose the Spring Boot exposes a set of useful
properties (from the
17.8.15. Configure a Component that is Used by JPAIf you want to configure a component that JPA uses, then you need to ensure that the component is initialized before JPA. When the component is auto-configured, Spring Boot takes care of this for you. For example, when Flyway is auto-configured, Hibernate is configured to depend upon Flyway so that Flyway has a chance to initialize the database before Hibernate tries to use it. If you are configuring a component yourself, you can use an Java
Kotlin
17.8.16. Configure jOOQ with Two DataSourcesIf you need to use jOOQ with multiple data sources, you should create your own
17.9. Database InitializationAn SQL database can be initialized in different ways depending on what your stack is. Of course, you can also do it manually, provided the database is a separate process. It is recommended to use a single mechanism for schema generation. 17.9.1. Initialize a Database Using JPAJPA has features for DDL generation, and these can be set up to run on startup against the database. This is controlled through two external properties:
17.9.2. Initialize a Database Using HibernateYou can set
In addition, a file named 17.9.3. Initialize a Database Using Basic SQL ScriptsSpring Boot can automatically create the schema (DDL scripts) of your JDBC Script-based If you are using a Higher-level Database Migration Tool, like Flyway or Liquibase, you should use them alone to create and initialize the schema. Using the basic 17.9.4. Initialize a Spring Batch DatabaseIf you use Spring Batch, it comes pre-packaged with SQL initialization scripts for most popular database platforms. Spring Boot can detect your database type and execute those scripts on startup. If you use an embedded database, this happens by default. You can also enable it for any database type, as shown in the following example: Properties
Yaml
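A sketch of the kind of properties example this refers to (this property name exists in Spring Boot 2.x's common application properties; set it to never to switch initialization off):

```properties
spring.batch.jdbc.initialize-schema=always
```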
You can also switch off the initialization explicitly by setting 17.9.5. Use a Higher-level Database Migration ToolSpring Boot supports two higher-level migration tools: Flyway and Liquibase. Execute Flyway Database Migrations on StartupTo automatically run Flyway database migrations on
startup, add the Typically, migrations are scripts in the form Properties
Yaml
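A sketch of the kind of properties example this refers to, overriding the default classpath:db/migration location (the property name exists under the spring.flyway namespace; the second location is illustrative):

```properties
spring.flyway.locations=classpath:db/migration,filesystem:/opt/migration
```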
You can also add a special Properties
Yaml
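A sketch of using the {vendor} placeholder so that vendor-specific migration scripts are picked up (Flyway then resolves a directory per database vendor, such as db/migration/mysql):

```properties
spring.flyway.locations=classpath:db/migration/{vendor}
```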
Rather than using Migrations can also be written in Java. Flyway will be auto-configured with any beans that implement
Spring Boot calls Flyway supports SQL and Java callbacks. To use SQL-based callbacks, place
the callback scripts in the By default, Flyway autowires the ( You can also use
Flyway to provide data for specific scenarios. For example, you can place test-specific migrations in Properties
Yaml
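A sketch of the kind of profile-specific configuration this refers to, placed for example in a file such as application-dev.properties so that the extra location only applies when the dev profile is active:

```properties
spring.flyway.locations=classpath:/db/migration,classpath:/dev/db/migration
```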
With that setup,
migrations in Execute Liquibase Database Migrations on StartupTo automatically run Liquibase database migrations on startup, add the
By default, the master change log is read from By default, Liquibase autowires the ( See
17.9.6. Depend Upon an Initialized DatabaseDatabase initialization is performed while the application is starting up as part of application context refresh. To allow an initialized database to be accessed during startup, beans that act as database initializers and beans that require that database to have been initialized are detected automatically. Beans whose initialization depends upon the database having been initialized are configured to depend upon those that initialize it. If, during startup, your application tries to access the database and it has not been initialized, you can configure additional detection of beans that initialize the database and require the database to have been initialized. Detect a Database InitializerSpring Boot will automatically detect beans of the following types that initialize an SQL database:
If you are using a third-party
starter for a database initialization library, it may provide a detector such that beans of other types are also detected automatically. To have other beans be detected, register an implementation of Detect a Bean That Depends On Database InitializationSpring Boot will automatically detect beans of the following types that depend upon database initialization:
If you are using a third-party starter data access library, it may provide a detector such that beans of other types are also detected automatically. To have other beans be detected, register an implementation of 17.10. MessagingSpring Boot offers a number of starters to support messaging. This section answers questions that arise from using messaging with Spring Boot. 17.10.1. Disable Transacted JMS SessionIf your JMS broker does not support transacted sessions, you have to disable the support of transactions altogether. If you create your own Java
Kotlin
The preceding example overrides the default factory, and it should be applied to any other factory that your application defines, if any. 17.11. Batch ApplicationsA number of questions often arise when people use Spring Batch from within a Spring Boot application. This section addresses those questions. 17.11.1. Specifying a Batch Data SourceBy default, batch applications require a 17.11.2. Running Spring Batch Jobs on StartupSpring Batch auto-configuration is enabled by adding By default, it executes all 17.11.3. Running From the Command LineSpring Boot converts any command line argument starting with
If you specify a property of the
This provides only one argument to the batch job: 17.11.4. Storing the Job RepositorySpring Batch requires a data store for the 17.12. ActuatorSpring Boot includes the Spring Boot Actuator. This section answers questions that often arise from its use. 17.12.1. Change the HTTP Port or Address of the Actuator EndpointsIn a standalone application, the Actuator HTTP port defaults to the same as the main HTTP port. To make the application listen on a different port, set the external property: 17.12.2. Customize the ‘whitelabel’ Error PageSpring Boot installs a ‘whitelabel’ error page that you see in a browser client if you encounter a server error (machine clients consuming JSON and other media types should see a sensible response with the right error code).
Overriding the error page with your own depends on the templating technology that you use. For example, if you use Thymeleaf, you can add an See also the section on “Error Handling” for details of how to register handlers in the servlet container. 17.12.3. Sanitize Sensitive ValuesInformation returned by the Furthermore, Spring Boot sanitizes the sensitive portion of URI-like values for keys with one of the following endings:
The sensitive portion of the URI is identified using the format Customizing SanitizationSanitization can be customized in two different ways. The default patterns used by the To take more control over the
sanitization, define a 17.12.4. Map Health Indicators to Micrometer MetricsSpring Boot health indicators return a The following example shows one way to write such an exporter: Java
Kotlin
17.13. SecurityThis section addresses questions about security when working with Spring Boot, including questions that arise from using Spring Security with Spring Boot. 17.13.1. Switch off the Spring Boot Security ConfigurationIf you define a 17.13.2. Change the UserDetailsService and Add User AccountsIf you provide a The easiest way to add user accounts is to provide your own 17.13.3. Enable HTTPS When Running behind a Proxy ServerEnsuring that all your
main endpoints are only available over HTTPS is an important chore for any application. If you use Tomcat as a servlet container, then Spring Boot adds Tomcat's own RemoteIpValve automatically if it detects certain environment settings, and you should be able to rely on the HttpServletRequest to report whether it is secure or not (even downstream of a proxy server that handles the real SSL termination). The standard behavior is determined by the presence or absence of certain request headers (x-forwarded-for and x-forwarded-proto), whose names are conventional, so it should work with most front-end proxies. You can switch on the valve by adding some entries to application.properties, as shown in the following example:
Yaml
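A sketch of the kind of properties example this refers to (the property names exist under the server.tomcat.remoteip namespace; the header names are the conventional ones used by most front-end proxies):

```properties
server.tomcat.remoteip.remote-ip-header=x-forwarded-for
server.tomcat.remoteip.protocol-header=x-forwarded-proto
```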
(The presence of either of those properties switches on the valve. Alternatively, you can add the To configure Spring Security to
require a secure channel for all (or some) requests, consider adding your own Java
Kotlin
17.14. Hot SwappingSpring Boot supports hot swapping. This section answers questions about how it works. 17.14.1. Reload Static ContentThere are several options for hot reloading. The recommended approach is to use Alternatively, running in an IDE (especially with debugging on) is a good way to do development (all modern IDEs allow reloading of static resources and usually also allow hot-swapping of Java class changes). Finally, the Maven and Gradle plugins can be configured (see the 17.14.2. Reload Templates without Restarting the ContainerMost of the templating technologies supported by Spring Boot include a configuration option to disable caching (described later in this document). If you use the Thymeleaf TemplatesIf you use Thymeleaf, set FreeMarker TemplatesIf you use FreeMarker, set Groovy TemplatesIf you use Groovy templates, set 17.14.3. Fast Application RestartsThe 17.14.4. Reload Java Classes without Restarting the ContainerMany modern IDEs (Eclipse, IDEA, and others) support hot swapping of bytecode. Consequently, if you make a change that does not affect class or method signatures, it should reload cleanly with no side effects. 17.15. TestingSpring Boot includes a number of testing utilities and support classes as well as a dedicated starter that provides common test dependencies. This section answers common questions about testing. 17.15.1. Testing With Spring SecuritySpring Security provides support for running tests as a specific user. For example, the test in the snippet below will run with an authenticated user that has the Java
Kotlin
Spring Security provides comprehensive integration with Spring MVC Test, and this can also be used when testing controllers using the @WebMvcTest slice annotation. For additional details on Spring Security’s testing support, see Spring Security’s reference documentation.
Kotlin
This will start up a docker container running Neo4j (if Docker is running locally) before any of the tests are run. In most cases, you will need to configure the application using details from the running container, such as container IP or port. This can be done with a static Java
Kotlin
The above configuration allows Neo4j-related beans in the application to communicate with Neo4j running inside the Testcontainers-managed Docker container. 17.15.3. Structure
The commit time in git.properties is expected to match the format yyyy-MM-dd'T'HH:mm:ssZ. This is the default format for both plugins listed above. Using this format allows the time to be parsed into a Date, and its format, when serialized to JSON, to be controlled by Jackson's date serialization settings.
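As a plain-Java illustration of the pattern described above (no Spring required), the following sketch parses two commit-time strings with that format; the timestamp values are invented for illustration:

```java
import java.text.SimpleDateFormat;
import java.util.Date;

public class CommitTimeFormat {
    public static void main(String[] args) throws Exception {
        // The pattern the documentation expects for the git.properties commit time:
        SimpleDateFormat format = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ssZ");
        // Two renderings of the same instant: 18:30 at UTC+2 equals 16:30 UTC.
        Date local = format.parse("2022-10-20T18:30:00+0200");
        Date utc = format.parse("2022-10-20T16:30:00+0000");
        // The "Z" element parses the zone offset, so both strings yield the same Date.
        System.out.println(local.equals(utc));
    }
}
```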
17.16.3. Customize Dependency Versions
The spring-boot-dependencies
POM manages the versions of common dependencies. The Spring Boot plugins for Maven and Gradle allow these managed dependency versions to be customized using build properties.
Each Spring Boot release is designed and tested against this specific set of third-party dependencies. Overriding versions may cause compatibility issues.
To override dependency versions with Maven, see this section of the Maven plugin’s documentation.
To override dependency versions in Gradle, see this section of the Gradle plugin’s documentation.
17.16.4. Create an Executable JAR with Maven
The spring-boot-maven-plugin
can be used to create an executable “fat” JAR. If you use the spring-boot-starter-parent
POM, you can declare the plugin and your jars are repackaged as follows:
<build>
<plugins>
<plugin>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-maven-plugin</artifactId>
</plugin>
</plugins>
</build>
If you do not use the
parent POM, you can still use the plugin. However, you must additionally add an <executions>
section, as follows:
<build>
<plugins>
<plugin>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-maven-plugin</artifactId>
<version>{spring-boot-version}</version>
<executions>
<execution>
<goals>
<goal>repackage</goal>
</goals>
</execution>
</executions>
</plugin>
</plugins>
</build>
17.16.5. Use a Spring Boot Application as a Dependency
Like a war file, a Spring Boot application is not intended to be used as a dependency. If your application contains classes that you want to share with other projects, the recommended approach is to move that code into a separate module. The separate module can then be depended upon by your application and other projects.
If you cannot rearrange your code as recommended above, Spring Boot’s Maven and Gradle plugins must be configured to produce a separate artifact that is suitable for use as a dependency. The executable archive cannot be used as a dependency as
the executable jar format packages application classes in BOOT-INF/classes
. This means that they cannot be found when the executable jar is used as a dependency.
To produce the two artifacts, one that can be used as a dependency and one that is executable, a classifier must be specified. This classifier is applied to the name of the executable archive, leaving the default archive for use as a dependency.
To configure a classifier of exec
in Maven, you can use the following configuration:
<build>
<plugins>
<plugin>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-maven-plugin</artifactId>
<configuration>
<classifier>exec</classifier>
</configuration>
</plugin>
</plugins>
</build>
17.16.6. Extract Specific Libraries When an Executable Jar Runs
Most nested libraries in an executable jar do not need to be unpacked in order to run. However, certain libraries can have problems. For example, JRuby includes its own nested jar support, which assumes that the jruby-complete.jar
is always directly available as a file in its own right.
To deal with any
problematic libraries, you can flag that specific nested jars should be automatically unpacked when the executable jar first runs. Such nested jars are written beneath the temporary directory identified by the java.io.tmpdir
system property.
Care should be taken to ensure that your operating system is configured so that it will not delete the jars that have been unpacked to the temporary directory while the application is still running.
For example, to indicate that JRuby should be flagged for unpacking by using the Maven Plugin, you would add the following configuration:
<build>
<plugins>
<plugin>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-maven-plugin</artifactId>
<configuration>
<requiresUnpack>
<dependency>
<groupId>org.jruby</groupId>
<artifactId>jruby-complete</artifactId>
</dependency>
</requiresUnpack>
</configuration>
</plugin>
</plugins>
</build>
17.16.7. Create a Non-executable JAR with Exclusions
Often, if you have an executable and a non-executable
jar as two separate build products, the executable version has additional configuration files that are not needed in a library jar. For example, the application.yml
configuration file might be excluded from the non-executable JAR.
In Maven, the executable jar must be the main artifact and you can add a classified jar for the library, as follows:
<build>
<plugins>
<plugin>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-maven-plugin</artifactId>
</plugin>
<plugin>
<artifactId>maven-jar-plugin</artifactId>
<executions>
<execution>
<id>lib</id>
<phase>package</phase>
<goals>
<goal>jar</goal>
</goals>
<configuration>
<classifier>lib</classifier>
<excludes>
<exclude>application.yml</exclude>
</excludes>
</configuration>
</execution>
</executions>
</plugin>
</plugins>
</build>
17.16.8. Remote Debug a Spring Boot Application Started with Maven
To attach a remote debugger to a Spring Boot application that was started with Maven, you can use the jvmArguments
property of the maven plugin.
17.16.9. Build an Executable Archive From Ant without Using spring-boot-antlib
To build with Ant, you need to grab dependencies, compile, and then create a jar or war archive. To make it executable, you can either use the
spring-boot-antlib
module or you can follow these instructions:
If you are building a jar, package the application’s classes and resources in a nested
BOOT-INF/classes
directory. If you are building a war, package the application’s classes in a nestedWEB-INF/classes
directory as usual.Add the runtime dependencies in a nested
BOOT-INF/lib
directory for a jar orWEB-INF/lib
for a war. Remember not to compress the entries in the archive.Add the
provided
(embedded container) dependencies in a nestedBOOT-INF/lib
directory for a jar orWEB-INF/lib-provided
for a war. Remember not to compress the entries in the archive.Add the
spring-boot-loader
classes at the root of the archive (so that theMain-Class
is available).Use the appropriate launcher (such as
JarLauncher
for a jar file) as aMain-Class
attribute in the manifest and specify the other properties it needs as manifest entries — principally, by setting aStart-Class
property.
The following example shows how to build an executable archive with Ant:
<target name="build" depends="compile">
<jar destfile="target/${ant.project.name}-${spring-boot.version}.jar" compress="false">
<mappedresources>
<fileset dir="target/classes" />
<globmapper from="*" to="BOOT-INF/classes/*"/>
</mappedresources>
<mappedresources>
<fileset dir="src/main/resources" erroronmissingdir="false"/>
<globmapper from="*" to="BOOT-INF/classes/*"/>
</mappedresources>
<mappedresources>
<fileset dir="${lib.dir}/runtime" />
<globmapper from="*" to="BOOT-INF/lib/*"/>
</mappedresources>
<zipfileset src="${lib.dir}/loader/spring-boot-loader-jar-${spring-boot.version}.jar" />
<manifest>
<attribute name="Main-Class" value="org.springframework.boot.loader.JarLauncher" />
<attribute name="Start-Class" value="${start-class}" />
</manifest>
</jar>
</target>
17.17. Traditional Deployment
Spring Boot supports traditional deployment as well as more modern forms of deployment. This section answers common questions about traditional deployment.
17.17.1. Create a Deployable War File
Because Spring WebFlux does not strictly depend on the servlet API and applications are deployed by default on an embedded Reactor Netty server, war deployment is not supported for WebFlux applications.
The first step in producing a deployable war file is to provide a SpringBootServletInitializer
subclass and override its configure
method. Doing so makes use of Spring Framework’s servlet 3.0 support and lets you configure your application when it is launched by the servlet container. Typically, you should update your application’s main class to extend SpringBootServletInitializer
, as shown in the following example:
Java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.builder.SpringApplicationBuilder;
import org.springframework.boot.web.servlet.support.SpringBootServletInitializer;
@SpringBootApplication
public class MyApplication extends SpringBootServletInitializer {
@Override
protected SpringApplicationBuilder configure(SpringApplicationBuilder application) {
return application.sources(MyApplication.class);
}
public static void main(String[] args) {
SpringApplication.run(MyApplication.class, args);
}
}
Kotlin
import org.springframework.boot.autoconfigure.SpringBootApplication
import org.springframework.boot.builder.SpringApplicationBuilder
import org.springframework.boot.runApplication
import org.springframework.boot.web.servlet.support.SpringBootServletInitializer
@SpringBootApplication
class MyApplication : SpringBootServletInitializer() {
override fun configure(application: SpringApplicationBuilder): SpringApplicationBuilder {
return application.sources(MyApplication::class.java)
}
}
fun main(args: Array<String>) {
runApplication<MyApplication>(*args)
}
The
next step is to update your build configuration such that your project produces a war file rather than a jar file. If you use Maven and spring-boot-starter-parent
(which configures Maven’s war plugin for you), all you need to do is to modify pom.xml
to change the packaging to war, as follows:
<packaging>war</packaging>
If you use Gradle, you need to modify build.gradle
to apply the war plugin to the project, as follows:
The final step in the process is to ensure that the embedded servlet container does not interfere with the servlet container to which the war file is deployed. To do so, you need to mark the embedded servlet container dependency as being provided.
If you use Maven, the following example marks the servlet container (Tomcat, in this case) as being provided:
<dependencies>
    <!-- ... -->
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-tomcat</artifactId>
        <scope>provided</scope>
    </dependency>
    <!-- ... -->
</dependencies>
If you use Gradle, the following example marks the servlet container (Tomcat, in this case) as being provided:
dependencies {
    // ...
    providedRuntime 'org.springframework.boot:spring-boot-starter-tomcat'
    // ...
}
providedRuntime is preferred to Gradle’s compileOnly configuration. Among other limitations, compileOnly dependencies are not on the test classpath, so any web-based integration tests fail.
If you use the Spring Boot build tools, marking the embedded servlet container dependency as provided produces an executable war file with the provided dependencies packaged in a lib-provided directory. This means that, in addition to being deployable to a servlet container, you can also run your application by using java -jar on the command line.
17.17.2. Convert an Existing Application to Spring Boot
To convert an existing non-web Spring application to a Spring Boot application, replace the code that creates your ApplicationContext with calls to SpringApplication or SpringApplicationBuilder. Spring MVC web applications are generally amenable to first creating a deployable war application and then migrating it later to an executable war or jar. See the Getting Started Guide on Converting a jar to a war.
To create a deployable war by extending SpringBootServletInitializer (for example, in a class called Application) and adding the Spring Boot @SpringBootApplication annotation, use code similar to that shown in the following example:
Java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.builder.SpringApplicationBuilder;
import org.springframework.boot.web.servlet.support.SpringBootServletInitializer;

@SpringBootApplication
public class MyApplication extends SpringBootServletInitializer {

    @Override
    protected SpringApplicationBuilder configure(SpringApplicationBuilder application) {
        // Customize the application or call application.sources(...) to add sources
        // Since our example is itself a @Configuration class (via @SpringBootApplication)
        // we actually do not need to override this method.
        return application;
    }

}
Kotlin
import org.springframework.boot.autoconfigure.SpringBootApplication
import org.springframework.boot.builder.SpringApplicationBuilder
import org.springframework.boot.runApplication
import org.springframework.boot.web.servlet.support.SpringBootServletInitializer

@SpringBootApplication
class MyApplication : SpringBootServletInitializer() {

    override fun configure(application: SpringApplicationBuilder): SpringApplicationBuilder {
        // Customize the application or call application.sources(...) to add sources
        // Since our example is itself a @Configuration class (via @SpringBootApplication)
        // we actually do not need to override this method.
        return application
    }

}
Remember that, whatever you put in the sources is merely a Spring ApplicationContext. Normally, anything that already works should work here. There might be some beans you can remove later and let Spring Boot provide its own defaults for them, but it should be possible to get something working before you need to do that.
Static resources can be moved to /public (or /static or /resources or /META-INF/resources) in the classpath root. The same applies to messages.properties (which Spring Boot automatically detects in the root of the classpath).
Vanilla usage of Spring DispatcherServlet and Spring Security should require no further changes. If you have other features in your application (for instance, using other servlets or filters), you may need to add some configuration to your Application context, by replacing those elements from the web.xml, as follows:
A @Bean of type Servlet or ServletRegistrationBean installs that bean in the container as if it were a <servlet/> and <servlet-mapping/> in web.xml.
A @Bean of type Filter or FilterRegistrationBean behaves similarly (as a <filter/> and <filter-mapping/>).
An ApplicationContext in an XML file can be added through an @ImportResource in your Application. Alternatively, cases where annotation configuration is heavily used already can be recreated in a few lines as @Bean definitions.
Once the war file is working, you can make it executable by adding a main method to your Application, as shown in the following example:
Java
public static void main(String[] args) {
    SpringApplication.run(MyApplication.class, args);
}
Kotlin
fun main(args: Array<String>) {
    runApplication<MyApplication>(*args)
}
If you intend to start your application both as a war and as an executable application, you need to share the customizations of the builder in a method that is available both to the SpringBootServletInitializer callback and to the main method.
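The sharing pattern above can be sketched without any framework: both entry points call one customize method, so the application is configured identically whether it starts from the servlet-container callback or from main. Builder, customize, and servletEntry below are hypothetical stand-ins for illustration, not Spring API:

```java
import java.util.ArrayList;
import java.util.List;

public class SharedCustomization {

    // Hypothetical stand-in for SpringApplicationBuilder
    static class Builder {
        final List<String> sources = new ArrayList<>();
        Builder sources(String source) {
            sources.add(source);
            return this;
        }
    }

    // The single place where customizations live, used by both entry points
    static Builder customize(Builder builder) {
        return builder.sources("MyApplication");
    }

    // Stands in for the configure(...) callback used by the servlet container
    static Builder servletEntry() {
        return customize(new Builder());
    }

    // Stands in for the main method used when running with java -jar
    public static void main(String[] args) {
        System.out.println(servletEntry().sources);
    }
}
```

Because both routes funnel through customize, adding a source or any other builder tweak in that one method keeps the war deployment and the executable jar behaving the same.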
Applications can fall into more than one category:
Servlet 3.0+ applications with no web.xml.
Applications with a web.xml.
Applications with a context hierarchy.
Applications without a context hierarchy.
All of these should be amenable to translation, but each might require slightly different techniques.
Servlet 3.0+ applications might translate pretty easily if they already use the Spring Servlet 3.0+ initializer support classes. Normally, all the code from an existing WebApplicationInitializer can be moved into a SpringBootServletInitializer. If your existing application has more than one ApplicationContext (for example, if it uses AbstractDispatcherServletInitializer) then you might be able to combine all your context sources into a single SpringApplication. The main complication you might encounter is if combining does not work and you need to maintain the context hierarchy. See the entry on building a hierarchy for examples. An existing parent context that contains web-specific features usually needs to be broken up so that all the ServletContextAware components are in the child context.
Applications that are not already Spring applications might be convertible to Spring Boot applications, and the previously mentioned guidance may help. However, you may yet encounter problems. In that case, we suggest asking questions on Stack Overflow with a tag of spring-boot.
17.17.3. Deploying a WAR to WebLogic
To deploy a Spring Boot application to WebLogic, you must ensure that your servlet initializer directly implements WebApplicationInitializer (even if you extend from a base class that already implements it).
A typical initializer for WebLogic should resemble the following example:
Java
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.web.servlet.support.SpringBootServletInitializer;
import org.springframework.web.WebApplicationInitializer;
@SpringBootApplication
public class MyApplication extends SpringBootServletInitializer implements WebApplicationInitializer {
}
Kotlin
import org.springframework.boot.autoconfigure.SpringBootApplication
import org.springframework.boot.web.servlet.support.SpringBootServletInitializer
import org.springframework.web.WebApplicationInitializer
@SpringBootApplication
class MyApplication : SpringBootServletInitializer(), WebApplicationInitializer
If you use Logback, you also need to tell WebLogic to prefer the packaged version rather than the version that was pre-installed with the server. You can do so by adding a WEB-INF/weblogic.xml file with the following contents:
<?xml version="1.0" encoding="UTF-8"?>
<wls:weblogic-web-app
    xmlns:wls="http://xmlns.oracle.com/weblogic/weblogic-web-app"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://java.sun.com/xml/ns/javaee
        https://java.sun.com/xml/ns/javaee/ejb-jar_3_0.xsd
        http://xmlns.oracle.com/weblogic/weblogic-web-app
        https://xmlns.oracle.com/weblogic/weblogic-web-app/1.4/weblogic-web-app.xsd">
    <wls:container-descriptor>
        <wls:prefer-application-packages>
            <wls:package-name>org.slf4j</wls:package-name>
        </wls:prefer-application-packages>
    </wls:container-descriptor>
</wls:weblogic-web-app>
Appendices
Appendix A: Common Application Properties
Various properties can be specified inside your application.properties file, inside your application.yml file, or as command line switches. This appendix provides a list of common Spring Boot properties and references to the underlying classes that consume them.
Property contributions can come from additional jar files on your classpath, so you should not consider this an exhaustive list. Also, you can define your own properties.
A.1. Core Properties
Name | Description | Default Value |
---|---|---|
| Enable debug logs. |
|
| Arbitrary properties to add to the info endpoint. | |
| Charset to use for console output. | |
| Charset to use for file output. | |
| Location of the logging configuration file. For instance, `classpath:logback.xml` for Logback. | |
| Conversion word used when logging exceptions. |
|
| Log file name (for instance, `myapp.log`). Names can be an exact location or relative to the current directory. | |
| Location of the log file. For instance, `/var/log`. | |
| Log groups to quickly change multiple loggers at the same time. For instance, `logging.group.db=org.hibernate,org.springframework.jdbc`. | |
| Log levels severity mapping. For instance, `logging.level.org.springframework=DEBUG`. | |
| Overriding configuration files used to create a composite configuration. | |
| Whether to clean the archive log files on startup. |
|
| Pattern for rolled-over log file names. |
|
| Maximum log file size. |
|
| Maximum number of archive log files to keep. |
|
| Total size of log backups to be kept. |
|
| Appender pattern for output to the console. Supported only with the default Logback setup. |
|
| Appender pattern for log date format. Supported only with the default Logback setup. |
|
| Appender pattern for output to a file. Supported only with the default Logback setup. |
|
| Appender pattern for log level. Supported only with the default Logback setup. |
|
| Register a shutdown hook for the logging system when it is initialized. Disabled automatically when deployed as a war file. |
|
| Add @EnableAspectJAutoProxy. |
|
| Whether subclass-based (CGLIB) proxies are to be created (true), as opposed to standard Java interface-based proxies (false). |
|
| Whether to enable admin features for the application. |
|
| JMX name of the application admin MBean. |
|
| Application name. | |
| Auto-configuration classes to exclude. | |
| Banner file encoding. |
|
| Bit depth to use for ANSI colors. Supported values are 4 (16 color) or 8 (256 color). |
|
| Height of the banner image in chars (default based on image height). | |
| Whether images should be inverted for dark terminal themes. |
|
| Banner image file location (jpg or png can also be used). |
|
| Left hand image margin in chars. |
|
| Pixel mode to use when rendering the image. |
|
| Width of the banner image in chars. |
|
| Banner text resource location. |
|
| Whether to skip search of BeanInfo classes. |
|
| Whether to log form data at DEBUG level, and headers at TRACE level. |
|
| Limit on the number of bytes that can be buffered whenever the input stream needs to be aggregated. This applies only to the auto-configured WebFlux server and WebClient instances. By default this is not set, in which case individual codec defaults apply. Most codecs are limited to 256K by default. | |
| Required cloud platform for the document to be included. | |
| Profile expressions that should match for the document to be included. | |
| Config file locations used in addition to the defaults. | |
| Import additional config data. | |
| Config file locations that replace the defaults. | |
| Config file name. |
|
| Whether to enable configuration data processing legacy mode. |
|
| File encoding. |
|
| Location of the generated build-info.properties file. |
|
| File encoding. |
|
| Location of the generated git.properties file. |
|
| JMX domain name. | |
| Expose management beans to the JMX domain. |
|
| MBeanServer bean name. |
|
| Whether unique runtime object names should be ensured. |
|
| Timeout for the shutdown of any phase (group of SmartLifecycle beans with the same 'phase' value). |
|
| Whether bean definition overriding, by registering a definition with the same name as an existing definition, is allowed. |
|
| Whether to allow circular references between beans and automatically try to resolve them. |
|
| Mode used to display the banner when the application runs. |
|
| Override the Cloud Platform auto-detection. | |
| Whether initialization should be performed lazily. |
|
| Whether to log information about the application when it starts. |
|
| Whether the application should have a shutdown hook registered. |
|
| Sources (class names, package names, or XML resource locations) to include in the ApplicationContext. | |
| Flag to explicitly request a specific type of web application. If not set, auto-detected based on the classpath. | |
| Expected character encoding the application must use. | |
| Whether to always apply the MessageFormat rules, parsing even messages without arguments. |
|
| Comma-separated list of basenames (essentially a fully-qualified classpath location), each following the ResourceBundle convention with relaxed support for slash based locations. If it doesn't contain a package qualifier (such as "org.mypackage"), it will be resolved from the classpath root. |
|
| Loaded resource bundle files cache duration. When not set, bundles are cached forever. If a duration suffix is not specified, seconds will be used. | |
| Message bundles encoding. |
|
| Whether to fall back to the system Locale if no files for a specific Locale have been found. If this is turned off, the only fallback will be the default file (e.g. "messages.properties" for basename "messages"). |
|
| Whether to use the message code as the default message instead of throwing a "NoSuchMessageException". Recommended during development only. |
|
| Configures the ANSI output. |
|
| Fails if ApplicationPidFileWriter is used but it cannot write the PID file. | |
| Location of the PID file to write (if ApplicationPidFileWriter is used). | |
| Comma-separated list of active profiles. Can be overridden by a command line switch. | |
| Name of the profile to enable if no profile is active. |
|
| Profile groups to define a logical name for a related group of profiles. | |
| Unconditionally activate the specified comma-separated list of profiles (or list of profiles if using YAML). | |
| Whether to automatically start the scheduler after initialization. |
|
| Prefixes for single-line comments in SQL initialization scripts. |
|
| Database schema initialization mode. |
|
| Platform to use in initialization scripts if the @@platform@@ placeholder is used. Auto-detected by default. | |
| Path to the SQL file to use to initialize the database schema. |
|
| Quartz job store type. |
|
| Whether configured jobs should overwrite existing job definitions. |
|
| Additional Quartz Scheduler properties. | |
| Name of the scheduler. |
|
| Delay after which the scheduler is started once initialization completes. Setting this property makes sense if no jobs should be run before the entire application has started up. |
|
| Whether to wait for running jobs to complete on shutdown. |
|
| Whether the Reactor Debug Agent should be enabled when reactor-tools is present. |
|
| Whether core threads are allowed to time out. This enables dynamic growing and shrinking of the pool. |
|
| Core number of threads. |
|
| Time limit for which threads may remain idle before being terminated. |
|
| Maximum allowed number of threads. If tasks are filling up the queue, the pool can expand up to that size to accommodate the load. Ignored if the queue is unbounded. | |
| Queue capacity. An unbounded capacity does not increase the pool and therefore ignores the "max-size" property. | |
| Whether the executor should wait for scheduled tasks to complete on shutdown. |
|
| Maximum time the executor should wait for remaining tasks to complete. | |
| Prefix to use for the names of newly created threads. |
|
| Maximum allowed number of threads. |
|
| Whether the executor should wait for scheduled tasks to complete on shutdown. |
|
| Maximum time the executor should wait for remaining tasks to complete. | |
| Prefix to use for the names of newly created threads. |
|
| Enable trace logs. |
|
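To make the logging entries above concrete, several of them might be combined in an application.properties file like this (property values are the ones quoted in the table; the group name db is arbitrary, and logging.level can also target a group):

```properties
# Route two related logger namespaces through one named group
logging.group.db=org.hibernate,org.springframework.jdbc
# Set the severity for the group and for one framework package
logging.level.db=WARN
logging.level.org.springframework=DEBUG
# Send output to a rolling file with an explicit Logback configuration
logging.file.name=myapp.log
logging.config=classpath:logback.xml
```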
A.5. Data Properties
Name | Description | Default Value |
---|---|---|
| Connection string used to locate the Couchbase cluster. | |
| Length of time an HTTP connection may remain idle before it is closed and removed from the pool. |
|
| Maximum number of sockets per node. |
|
| Minimum number of sockets per node. |
|
| Whether to enable SSL support. Enabled automatically if a "keyStore" is provided unless specified otherwise. | |
| Path to the JVM key store that holds the certificates. | |
| Password used to access the key store. | |
| Timeout for the analytics service. |
|
| Bucket connect timeout. |
|
| Bucket disconnect timeout. |
|
| Timeout for operations on a specific key-value. |
|
| Timeout for operations on a specific key-value with a durability level. |
|
| Timeout for the management operations. |
|
| N1QL query operations timeout. |
|
| Timeout for the search service. |
|
| Regular and geospatial view operations timeout. |
|
| Cluster password. | |
| Cluster username. | |
| Whether to enable the PersistenceExceptionTranslationPostProcessor. |
|
| Compression supported by the Cassandra binary protocol. |
|
| Location of the configuration file to use. | |
| Timeout to use when establishing driver connections. |
|
| Timeout to use for internal queries that run as part of the initialization process, just after a connection is opened. |
|
| Cluster node addresses in the form 'host:port', or a simple 'host' to use the configured port. |
|
| Timeout to use for control queries. |
|
| Keyspace name to use. | |
| Datacenter that is considered "local". Contact points should be from this datacenter. | |
| Login password of the server. | |
| Heartbeat interval after which a message is sent on an idle connection to make sure it's still alive. |
|
| Idle timeout before an idle connection is removed. |
|
| Port to use if a contact point does not specify one. |
|
| Type of Cassandra repositories to enable. |
|
| Queries consistency level. | |
| How many rows will be retrieved simultaneously in a single network round-trip. |
|
| Queries serial consistency level. | |
| How often the throttler attempts to dequeue requests. Set this high enough that each attempt will process multiple entries in the queue, but not delay requests too much. | |
| Maximum number of requests that are allowed to execute in parallel. | |
| Maximum number of requests that can be enqueued when the throttling threshold is exceeded. | |
| Maximum allowed request rate. | |
| Request throttling type. |
|
| How long the driver waits for a request to complete. |
|
| Schema action to take at startup. |
|
| Name of the Cassandra session. | |
| Enable SSL support. |
|
| Login user of the server. | |
| Automatically create views and indexes. Use the meta-data provided by "@ViewIndexed", "@N1qlPrimaryIndexed" and "@N1qlSecondaryIndexed". |
|
| Name of the bucket to connect to. | |
| Fully qualified name of the FieldNamingStrategy to use. | |
| Type of Couchbase repositories to enable. |
|
| Name of the scope used for all collection access. | |
| Name of the field that stores the type information for complex types when using "MappingCouchbaseConverter". |
|
| Whether to enable Elasticsearch repositories. |
|
| Whether to enable JDBC repositories. |
|
| Bootstrap mode for JPA repositories. |
|
| Whether to enable JPA repositories. |
|
| Whether to enable LDAP repositories. |
|
| Authentication database name. | |
| Whether to enable auto-index creation. | |
| Database name. | |
| Fully qualified name of the FieldNamingStrategy to use. | |
| GridFS bucket name. | |
| GridFS database name. | |
| Mongo server host. Cannot be set with URI. | |
| Login password of the mongo server. Cannot be set with URI. | |
| Mongo server port. Cannot be set with URI. | |
| Required replica set name for the cluster. Cannot be set with URI. | |
| Type of Mongo repositories to enable. |
|
| Mongo database URI. Overrides host, port, username, password, and database. |
|
| Login user of the mongo server. Cannot be set with URI. | |
| Representation to use when converting a UUID to a BSON binary value. |
|
| Database name to use. By default, the server decides the default database to use. | |
| Type of Neo4j repositories to enable. |
|
| Whether to enable R2DBC repositories. |
|
| Whether to enable Redis repositories. |
|
| Base path to be used by Spring Data REST to expose repository resources. | |
| Content type to use as a default when none is specified. | |
| Default size of pages. | |
| Strategy to use to determine which repositories get exposed. |
|
| Whether to enable enum value translation through the Spring Data REST default resource bundle. | |
| Name of the URL query string parameter that indicates how many results to return at once. | |
| Maximum size of pages. | |
| Name of the URL query string parameter that indicates what page to return. | |
| Whether to return a response body after creating an entity. | |
| Whether to return a response body after updating an entity. | |
| Name of the URL query string parameter that indicates what direction to sort results. | |
| Solr host. Ignored if "zk-host" is set. |
|
| ZooKeeper host address in the form HOST:PORT. | |
| Default page size. |
|
| Maximum page size to be accepted. |
|
| Whether to expose and assume 1-based page number indexes. Defaults to "false", meaning a page number of 0 in the request equals the first page. |
|
| Page index parameter name. |
|
| General prefix to be prepended to the page number and page size parameters. | |
| Delimiter to be used between the qualifier and the actual page number and size properties. |
|
| Page size parameter name. |
|
| Sort parameter name. |
|
| Commons DBCP2 specific settings bound to an instance of DBCP2's BasicDataSource | |
| Fully qualified name of the JDBC driver. Auto-detected based on the URL by default. | |
| Connection details for an embedded database. Defaults to the most suitable embedded database that is available on the classpath. | |
| Whether to generate a random datasource name. |
|
| Hikari specific settings bound to an instance of Hikari's HikariDataSource | |
| JNDI location of the datasource. Class, url, username and password are ignored when set. | |
| Datasource name to use if "generate-unique-name" is false. Defaults to "testdb" when using an embedded database, otherwise null. | |
| Oracle UCP specific settings bound to an instance of Oracle UCP's PoolDataSource | |
| Login password of the database. | |
| Tomcat datasource specific settings bound to an instance of Tomcat JDBC's DataSource | |
| Fully qualified name of the connection pool implementation to use. By default, it is auto-detected from the classpath. | |
| JDBC URL of the database. | |
| Login username of the database. | |
| XA datasource fully qualified name. | |
| Properties to pass to the XA data source. | |
| Connection timeout used when communicating with Elasticsearch. |
|
| Password for authentication with Elasticsearch. | |
| Prefix added to the path of every request sent to Elasticsearch. | |
| Delay of a sniff execution scheduled after a failure. |
|
| Interval between consecutive ordinary sniff executions. |
|
| Socket timeout used when communicating with Elasticsearch. |
|
| Comma-separated list of the Elasticsearch instances to use. |
|
| Username for authentication with Elasticsearch. | |
| Limit on the number of bytes that can be buffered whenever the input stream needs to be aggregated. | |
| Whether to enable the console. |
|
| Path at which the console is available. |
|
| Whether to enable trace output. |
|
| Password to access preferences and tools of H2 Console. | |
| Whether to enable remote access. |
|
| Login password. | |
| URL of the InfluxDB instance to which to connect. | |
| Login user. | |
| Number of rows that should be fetched from the database when more rows are needed. Use -1 to use the JDBC driver's default configuration. |
|
| Maximum number of rows. Use -1 to use the JDBC driver's default configuration. |
|
| Query timeout. Default is to use the JDBC driver's default configuration. If a duration suffix is not specified, seconds will be used. | |
| SQL dialect to use. Auto-detected by default. | |
| Target database to operate on, auto-detected by default. Can be alternatively set using the "databasePlatform" property. | |
| Name of the target database to operate on, auto-detected by default. Can be alternatively set using the "Database" enum. | |
| Whether to defer DataSource initialization until after any EntityManagerFactory beans have been created and initialized. |
|
| Whether to initialize the schema on startup. |
|
| DDL mode. This is actually a shortcut for the "hibernate.hbm2ddl.auto" property. Defaults to "create-drop" when using an embedded database and no schema manager was detected. Otherwise, defaults to "none". | |
| Fully qualified name of the implicit naming strategy. | |
| Fully qualified name of the physical naming strategy. | |
| Whether to use Hibernate's newer IdentifierGenerator for AUTO, TABLE and SEQUENCE. This is actually a shortcut for the "hibernate.id.new_generator_mappings" property. When not specified will default to "true". | |
| Mapping resources (equivalent to "mapping-file" entries in persistence.xml). | |
| Register OpenEntityManagerInViewInterceptor. Binds a JPA EntityManager to the thread for the entire processing of the request. |
|
| Additional native properties to set on the JPA provider. | |
| Whether to enable logging of SQL statements. |
|
| Whether read-only operations should use an anonymous environment. Disabled by default unless a username is set. | |
| Base suffix from which all operations should originate. | |
| LDAP specification settings. | |
| List of base DNs. | |
| Embedded LDAP password. | |
| Embedded LDAP username. | |
| Schema (LDIF) script resource reference. |
|
| Embedded LDAP port. |
|
| Whether to enable LDAP schema validation. |
|
| Path to the custom schema. | |
| Login password of the server. | |
| Whether NameNotFoundException should be ignored in searches via the LdapTemplate. |
|
| Whether PartialResultException should be ignored in searches via the LdapTemplate. |
|
| Whether SizeLimitExceededException should be ignored in searches via the LdapTemplate. |
|
| LDAP URLs of the server. | |
| Login username of the server. | |
| Directory used for data storage. | |
| Maximum size of the oplog. | |
| Name of the replica set. | |
| Version of Mongo to use. | |
| Kerberos ticket for connecting to the database. Mutual exclusive with a given username. | |
| Login password of the server. | |
| Realm to connect to. | |
| Login user of the server. | |
| Timeout for borrowing connections from the pool. |
|
| Maximum time transactions are allowed to retry. |
|
| Acquisition of new connections will be attempted for at most configured timeout. |
|
| Pooled connections that have been idle in the pool for longer than this threshold will be tested before they are used again. | |
| Whether to log leaked sessions. |
|
| Pooled connections older than this threshold will be closed and removed from the pool. |
|
| Maximum amount of connections in the connection pool towards a single database. |
|
| Whether to enable metrics. |
|
| Path to the file that holds the trusted certificates. | |
| Whether the driver should use encrypted traffic. |
|
| Whether hostname verification is required. |
|
| Trust strategy to use. |
|
| URI used by the driver. |
|
| Whether to generate a random database name. Ignore any configured name when enabled. |
|
| Database name. Set if no name is specified in the url. Default to "testdb" when using an embedded database. | |
| Login password of the database. Set if no password is specified in the url. | |
| Whether pooling is enabled. Requires r2dbc-pool. |
|
| Initial connection pool size. |
|
| Maximum time to acquire a connection from the pool. By default, wait indefinitely. | |
| Maximum time to wait to create a new connection. By default, wait indefinitely. | |
| Maximum amount of time that a connection is allowed to sit idle in the pool. |
|
| Maximum lifetime of a connection in the pool. By default, connections have an infinite lifetime. | |
| Maximal connection pool size. |
|
| Validation depth. |
|
| Validation query. | |
| Additional R2DBC options. | |
| R2DBC URL of the database. Database name, username, password and pooling options specified in the url take precedence over individual options. | |
| Login username of the database. Set if no username is specified in the url. | |
| Client name to be set on connections with CLIENT SETNAME. | |
| Type of client to use. By default, auto-detected according to the classpath. | |
| Maximum number of redirects to follow when executing commands across the cluster. | |
| Comma-separated list of "host:port" pairs to bootstrap from. This represents an "initial" list of cluster nodes and is required to have at least one entry. | |
| Connection timeout. | |
| Database index used by the connection factory. |
|
| Redis server host. |
|
| Whether to enable the pool. Enabled automatically if "commons-pool2" is available. With Jedis, pooling is implicitly enabled in sentinel mode and this setting only applies to single node setup. | |
| Maximum number of connections that can be allocated by the pool at a given time. Use a negative value for no limit. |
|
| Maximum number of "idle" connections in the pool. Use a negative value to indicate an unlimited number of idle connections. |
|
| Maximum amount of time a connection allocation should block before throwing an exception when the pool is exhausted. Use a negative value to block indefinitely. |
|
| Target for the minimum number of idle connections to maintain in the pool. This setting only has an effect if both it and time between eviction runs are positive. |
|
| Time between runs of the idle object evictor thread. When positive, the idle object evictor thread starts, otherwise no idle object eviction is performed. | |
| Whether adaptive topology refreshing using all available refresh triggers should be used. |
|
| Whether to discover and query all cluster nodes for obtaining the cluster topology. When set to false, only the initial seed nodes are used as sources for topology discovery. |
|
| Cluster topology refresh period. | |
| Whether to enable the pool. Enabled automatically if "commons-pool2" is available. With Jedis, pooling is implicitly enabled in sentinel mode and this setting only applies to single node setup. | |
| Maximum number of connections that can be allocated by the pool at a given time. Use a negative value for no limit. |
|
| Maximum number of "idle" connections in the pool. Use a negative value to indicate an unlimited number of idle connections. |
|
| Maximum amount of time a connection allocation should block before throwing an exception when the pool is exhausted. Use a negative value to block indefinitely. |
|
| Target for the minimum number of idle connections to maintain in the pool. This setting only has an effect if both it and time between eviction runs are positive. |
|
| Time between runs of the idle object evictor thread. When positive, the idle object evictor thread starts, otherwise no idle object eviction is performed. | |
| Shutdown timeout. |
|
| Login password of the redis server. | |
| Redis server port. |
|
| Name of the Redis server. | |
| Comma-separated list of "host:port" pairs. | |
| Password for authenticating with sentinel(s). | |
| Login username for authenticating with sentinel(s). | |
| Whether to enable SSL support. |
|
| Read timeout. | |
| Connection URL. Overrides host, port, and password. User is ignored. Example: redis://user:password@example.com:6379 | |
| Login username of the redis server. |
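Taken together, the connection and pooling rows above bind to the `spring.redis.*` namespace. A minimal `application.properties` sketch follows; the host, password, and pool sizes are illustrative values, not defaults:

```properties
# Standalone Redis connection (spring.redis.url, when set, overrides these)
spring.redis.host=redis.example.com
spring.redis.port=6379
spring.redis.password=secret

# Lettuce connection pool (requires commons-pool2 on the classpath)
spring.redis.lettuce.pool.max-active=16
spring.redis.lettuce.pool.max-idle=8
spring.redis.lettuce.pool.min-idle=2
spring.redis.lettuce.pool.max-wait=1s
```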
A.6. Transaction Properties
| Name | Description | Default Value |
|---|---|---|
| Timeout, in seconds, for borrowing connections from the pool. |
|
| Whether to ignore the transacted flag when creating a session. |
|
| Whether local transactions are desired. |
|
| Time, in seconds, between runs of the pool's maintenance thread. |
|
| Time, in seconds, after which connections are cleaned up from the pool. |
|
| Time, in seconds, that a connection can be pooled for before being destroyed. 0 denotes no limit. |
|
| Maximum size of the pool. |
|
| Minimum size of the pool. |
|
| Reap timeout, in seconds, for borrowed connections. 0 denotes no limit. |
|
| Unique name used to identify the resource during recovery. |
|
| Vendor-specific implementation of XAConnectionFactory. | |
| Vendor-specific XA properties. | |
| Timeout, in seconds, for borrowing connections from the pool. |
|
| Whether to use concurrent connection validation. |
|
| Default isolation level of connections provided by the pool. | |
| Timeout, in seconds, for establishing a database connection. |
|
| Time, in seconds, between runs of the pool's maintenance thread. |
|
| Time, in seconds, after which connections are cleaned up from the pool. |
|
| Time, in seconds, that a connection can be pooled for before being destroyed. 0 denotes no limit. |
|
| Maximum size of the pool. |
|
| Minimum size of the pool. |
|
| Reap timeout, in seconds, for borrowed connections. 0 denotes no limit. |
|
| SQL query or statement used to validate a connection before returning it. | |
| Unique name used to identify the resource during recovery. |
|
| Vendor-specific implementation of XAConnectionFactory. | |
| Vendor-specific XA properties. | |
| Specify whether sub-transactions are allowed. |
|
| Interval between checkpoints, expressed as the number of log writes between two checkpoints. A checkpoint reduces the log file size at the expense of adding some overhead in the runtime. |
|
| Default timeout for JTA transactions. |
|
| How long should normal shutdown (no-force) wait for transactions to complete. | |
| Whether to enable disk logging. |
|
| Whether a VM shutdown should trigger forced shutdown of the transaction core. |
|
| Directory in which the log files should be stored. Defaults to the current working directory. | |
| Transactions log file base name. |
|
| Maximum number of active transactions. |
|
| Maximum timeout that can be allowed for transactions. |
|
| Delay between two recovery scans. |
|
| Delay after which recovery can cleanup pending ('orphaned') log entries. |
|
| Number of retry attempts to commit the transaction before throwing an exception. |
|
| Delay between retry attempts. |
|
| Whether sub-transactions should be joined when possible. |
|
| Transaction manager implementation that should be started. | |
| Whether to use different (and concurrent) threads for two-phase commit on the participating resources. |
|
| The transaction manager's unique name. Defaults to the machine's IP address. If you plan to run more than one transaction manager against one database you must set this property to a unique value. | |
| Whether to enable JTA support. |
|
| Transaction logs directory. | |
| Transaction manager unique identifier. | |
| Default transaction timeout. If a duration suffix is not specified, seconds will be used. | |
| Whether to roll back on commit failures. |
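The JTA and `spring.transaction.*` rows above combine as shown in this sketch; the directory, identifier, and timeout are illustrative values:

```properties
# Enable JTA support and keep transaction logs in a stable directory
spring.jta.enabled=true
spring.jta.log-dir=transaction-logs
spring.jta.transaction-manager-id=tx-manager-1

# Plain Spring transaction defaults
spring.transaction.default-timeout=30s
spring.transaction.rollback-on-commit-failure=true
```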
A.8. Integration Properties
| Name | Description | Default Value |
|---|---|---|
| URL of the ActiveMQ broker. Auto-generated by default. | |
| Time to wait before considering a close complete. |
|
| Whether the default broker URL should be in memory. Ignored if an explicit broker has been specified. |
|
| Whether to stop message delivery before re-delivering messages from a rolled back transaction. This implies that message order is not preserved when this is enabled. |
|
| Whether to trust all packages. | |
| Comma-separated list of specific packages to trust (when not trusting all packages). | |
| Login password of the broker. | |
| Whether to block when a connection is requested and the pool is full. Set it to false to throw a "JMSException" instead. |
|
| Blocking period before throwing an exception if the pool is still full. |
|
| Whether a JmsPoolConnectionFactory should be created, instead of a regular ConnectionFactory. |
|
| Connection idle timeout. |
|
| Maximum number of pooled connections. |
|
| Maximum number of pooled sessions per connection in the pool. |
|
| Time to sleep between runs of the idle connection eviction thread. When negative, no idle connection eviction thread runs. |
|
| Whether to use only one anonymous "MessageProducer" instance. Set it to false to create one "MessageProducer" every time one is required. |
|
| Time to wait on message sends for a response. Set it to 0 to wait forever. |
|
| Login user of the broker. | |
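The ActiveMQ rows above map onto `spring.activemq.*`. A sketch with illustrative broker coordinates (the pool settings take effect only when a JmsPoolConnectionFactory is created):

```properties
spring.activemq.broker-url=tcp://broker.example.com:61616
spring.activemq.user=admin
spring.activemq.password=secret

# Use a JmsPoolConnectionFactory instead of a regular ConnectionFactory
spring.activemq.pool.enabled=true
spring.activemq.pool.max-connections=10
spring.activemq.pool.idle-timeout=30s
```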
| Artemis broker port. |
|
| Cluster password. Randomly generated on startup by default. | |
| Journal file directory. Not necessary if persistence is turned off. | |
| Whether to enable embedded mode if the Artemis server APIs are available. |
|
| Whether to enable persistent store. |
|
| Comma-separated list of queues to create on startup. |
|
| Server ID. By default, an auto-incremented counter is used. |
|
| Comma-separated list of topics to create on startup. |
|
| Artemis deployment mode, auto-detected by default. | |
| Login password of the broker. | |
| Whether to block when a connection is requested and the pool is full. Set it to false to throw a "JMSException" instead. |
|
| Blocking period before throwing an exception if the pool is still full. |
|
| Whether a JmsPoolConnectionFactory should be created, instead of a regular ConnectionFactory. |
|
| Connection idle timeout. |
|
| Maximum number of pooled connections. |
|
| Maximum number of pooled sessions per connection in the pool. |
|
| Time to sleep between runs of the idle connection eviction thread. When negative, no idle connection eviction thread runs. |
|
| Whether to use only one anonymous "MessageProducer" instance. Set it to false to create one "MessageProducer" every time one is required. |
|
| Login user of the broker. | |
| Database schema initialization mode. |
|
| Transaction isolation level to use when creating job meta-data for new jobs. Auto-detected based on whether JPA is being used or not. | |
| Platform to use in initialization scripts if the @@platform@@ placeholder is used. Auto-detected by default. | |
| Path to the SQL file to use to initialize the database schema. |
|
| Table prefix for all the batch meta-data tables. | |
| Execute all Spring Batch jobs in the context on startup. |
|
| Comma-separated list of job names to execute on startup (for instance, 'job1,job2'). By default, all Jobs found in the context are executed. | |
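The Spring Batch rows above can be combined as in this sketch; the job names are placeholders:

```properties
# Create the batch meta-data tables on an embedded database only
spring.batch.jdbc.initialize-schema=embedded
spring.batch.jdbc.table-prefix=BATCH_

# Run only selected jobs on startup instead of all jobs in the context
spring.batch.job.enabled=true
spring.batch.job.names=job1,job2
```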
| The location of the configuration file to use to initialize Hazelcast. | |
| Whether to create input channels if necessary. |
|
| Default number of subscribers allowed on, for example, a 'PublishSubscribeChannel'. | |
| Default number of subscribers allowed on, for example, a 'DirectChannel'. | |
| A comma-separated list of endpoint bean names patterns that should not be started automatically during application startup. | |
| A comma-separated list of message header names that should not be populated into Message instances during a header copying operation. | |
| Whether to throw an exception when a reply is not expected anymore by a gateway. |
|
| Whether to ignore failures for one or more of the handlers of the global 'errorChannel'. |
|
| Whether messages on the global 'errorChannel' should not be silently ignored when there are no subscribers. |
|
| Database schema initialization mode. |
|
| Platform to use in initialization scripts if the @@platform@@ placeholder is used. Auto-detected by default. | |
| Path to the SQL file to use to initialize the database schema. |
|
| Whether Spring Integration components should perform logging in the main message flow. When disabled, such logging will be skipped without checking the logging level. When enabled, such logging is controlled as normal by the logging system's log level configuration. |
|
| Cron expression for polling. Mutually exclusive with 'fixedDelay' and 'fixedRate'. | |
| Polling delay period. Mutually exclusive with 'cron' and 'fixedRate'. | |
| Polling rate period. Mutually exclusive with 'fixedDelay' and 'cron'. | |
| Polling initial delay. Applied for 'fixedDelay' and 'fixedRate'; ignored for 'cron'. | |
| Maximum number of messages to poll per polling cycle. | |
| How long to wait for messages on poll. |
|
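The poller rows above configure Spring Integration's default poller. Since the three trigger properties are mutually exclusive, a sketch sets only one of them (the durations are illustrative):

```properties
# Default poller: fixed delay between polls, with an initial delay
spring.integration.poller.fixed-delay=5s
spring.integration.poller.initial-delay=1s
spring.integration.poller.max-messages-per-poll=10
```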
| TCP RSocket server host to connect to. | |
| TCP RSocket server port to connect to. | |
| WebSocket RSocket server uri to connect to. | |
| Whether to handle message mapping for RSocket via Spring Integration. |
|
| Whether to cache message consumers. |
|
| Whether to cache sessions. |
|
| Whether to cache message producers. |
|
| Size of the session cache (per JMS Session type). |
|
| Connection factory JNDI name. When set, takes precedence over other connection factory auto-configurations. | |
| Acknowledge mode of the container. By default, the listener is transacted with automatic acknowledgment. | |
| Start the container automatically on startup. |
|
| Minimum number of concurrent consumers. | |
| Maximum number of concurrent consumers. | |
| Timeout to use for receive calls. Use -1 for a no-wait receive or 0 for no timeout at all. The latter is only feasible if not running within a transaction manager and is generally discouraged since it prevents clean shutdown. |
|
| Whether the default destination type is topic. |
|
| Default destination to use on send and receive operations that do not have a destination parameter. | |
| Delivery delay to use for send calls. | |
| Delivery mode. Enables QoS (Quality of Service) when set. | |
| Priority of a message when sending. Enables QoS (Quality of Service) when set. | |
| Whether to enable explicit QoS (Quality of Service) when sending a message. When enabled, the delivery mode, priority and time-to-live properties will be used when sending a message. QoS is automatically enabled when at least one of those settings is customized. | |
| Timeout to use for receive calls. | |
| Time-to-live of a message when sending. Enables QoS (Quality of Service) when set. | |
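The JMS listener and template rows above bind to `spring.jms.*`. A sketch with illustrative values; note that customizing any QoS property (delivery mode, priority, time-to-live) enables explicit QoS automatically:

```properties
# Listener container settings
spring.jms.listener.auto-startup=true
spring.jms.listener.acknowledge-mode=client
spring.jms.listener.concurrency=2
spring.jms.listener.max-concurrency=10

# JmsTemplate settings ('orders' is a placeholder destination)
spring.jms.template.default-destination=orders
spring.jms.template.delivery-mode=persistent
spring.jms.template.receive-timeout=1s
```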
| ID to pass to the server when making requests. Used for server-side logging. | |
| Whether to fail fast if the broker is not available on startup. |
|
| Additional admin-specific properties used to configure the client. | |
| Security protocol used to communicate with brokers. | |
| Password of the private key in either key store key or key store file. | |
| Certificate chain in PEM format with a list of X.509 certificates. | |
| Private key in PEM format with PKCS#8 keys. | |
| Location of the key store file. | |
| Store password for the key store file. | |
| Type of the key store. | |
| SSL protocol to use. | |
| Trusted certificates in PEM format with X.509 certificates. | |
| Location of the trust store file. | |
| Store password for the trust store file. | |
| Type of the trust store. | |
| Comma-delimited list of host:port pairs to use for establishing the initial connections to the Kafka cluster. Applies to all components unless overridden. | |
| ID to pass to the server when making requests. Used for server-side logging. | |
| Frequency with which the consumer offsets are auto-committed to Kafka if 'enable.auto.commit' is set to true. | |
| What to do when there is no initial offset in Kafka or if the current offset no longer exists on the server. | |
| Comma-delimited list of host:port pairs to use for establishing the initial connections to the Kafka cluster. Overrides the global property, for consumers. | |
| ID to pass to the server when making requests. Used for server-side logging. | |
| Whether the consumer's offset is periodically committed in the background. | |
| Maximum amount of time the server blocks before answering the fetch request if there isn't sufficient data to immediately satisfy the requirement given by "fetch-min-size". | |
| Minimum amount of data the server should return for a fetch request. | |
| Unique string that identifies the consumer group to which this consumer belongs. | |
| Expected time between heartbeats to the consumer coordinator. | |
| Isolation level for reading messages that have been written transactionally. |
|
| Deserializer class for keys. | |
| Maximum number of records returned in a single call to poll(). | |
| Additional consumer-specific properties used to configure the client. | |
| Security protocol used to communicate with brokers. | |
| Password of the private key in either key store key or key store file. | |
| Certificate chain in PEM format with a list of X.509 certificates. | |
| Private key in PEM format with PKCS#8 keys. | |
| Location of the key store file. | |
| Store password for the key store file. | |
| Type of the key store. | |
| SSL protocol to use. | |
| Trusted certificates in PEM format with X.509 certificates. | |
| Location of the trust store file. | |
| Store password for the trust store file. | |
| Type of the trust store. | |
| Deserializer class for values. | |
| Control flag for login configuration. |
|
| Whether to enable JAAS configuration. |
|
| Login module. |
|
| Additional JAAS options. | |
| Number of records between offset commits when ackMode is "COUNT" or "COUNT_TIME". | |
| Listener AckMode. See the spring-kafka documentation. | |
| Time between offset commits when ackMode is "TIME" or "COUNT_TIME". | |
| Prefix for the listener's consumer client.id property. | |
| Number of threads to run in the listener containers. | |
| Sleep interval between Consumer.poll(Duration) calls. |
|
| Time between publishing idle consumer events (no data received). | |
| Time between publishing idle partition consumer events (no data received for partition). | |
| Whether the container stops after the current record is processed or after all the records from the previous poll are processed. |
|
| Whether to log the container configuration during initialization (INFO level). | |
| Whether the container should fail to start if at least one of the configured topics is not present on the broker. |
|
| Time between checks for non-responsive consumers. If a duration suffix is not specified, seconds will be used. | |
| Multiplier applied to "pollTimeout" to determine if a consumer is non-responsive. | |
| Timeout to use when polling the consumer. | |
| Listener type. |
|
| Number of acknowledgments the producer requires the leader to have received before considering a request complete. | |
| Default batch size. A small batch size will make batching less common and may reduce throughput (a batch size of zero disables batching entirely). | |
| Comma-delimited list of host:port pairs to use for establishing the initial connections to the Kafka cluster. Overrides the global property, for producers. | |
| Total memory size the producer can use to buffer records waiting to be sent to the server. | |
| ID to pass to the server when making requests. Used for server-side logging. | |
| Compression type for all data generated by the producer. | |
| Serializer class for keys. | |
| Additional producer-specific properties used to configure the client. | |
| When greater than zero, enables retrying of failed sends. | |
| Security protocol used to communicate with brokers. | |
| Password of the private key in either key store key or key store file. | |
| Certificate chain in PEM format with a list of X.509 certificates. | |
| Private key in PEM format with PKCS#8 keys. | |
| Location of the key store file. | |
| Store password for the key store file. | |
| Type of the key store. | |
| SSL protocol to use. | |
| Trusted certificates in PEM format with X.509 certificates. | |
| Location of the trust store file. | |
| Store password for the trust store file. | |
| Type of the trust store. | |
| When non-empty, enables transaction support for the producer. | |
| Serializer class for values. | |
| Additional properties, common to producers and consumers, used to configure the client. | |
| Total number of processing attempts made before sending the message to the DLT. |
|
| Canonical backoff period. Used as an initial value in the exponential case, and as a minimum value in the uniform case. |
|
| Whether to enable topic-based non-blocking retries. |
|
| Maximum wait between retries. If less than the delay then the default of 30 seconds is applied. |
|
| Multiplier to use for generating the next backoff delay. |
|
| Whether to randomize the backoff delays. |
|
| Security protocol used to communicate with brokers. | |
| Password of the private key in either key store key or key store file. | |
| Certificate chain in PEM format with a list of X.509 certificates. | |
| Private key in PEM format with PKCS#8 keys. | |
| Location of the key store file. | |
| Store password for the key store file. | |
| Type of the key store. | |
| SSL protocol to use. | |
| Trusted certificates in PEM format with X.509 certificates. | |
| Location of the trust store file. | |
| Store password for the trust store file. | |
| Type of the trust store. | |
| Kafka streams application.id property; default spring.application.name. | |
| Whether to auto-start the streams factory bean. |
|
| Comma-delimited list of host:port pairs to use for establishing the initial connections to the Kafka cluster. Overrides the global property, for streams. | |
| Maximum memory size to be used for buffering across all threads. | |
| Clean up the application’s local state directory on shutdown. |
|
| Clean up the application’s local state directory on startup. |
|
| ID to pass to the server when making requests. Used for server-side logging. | |
| Additional Kafka properties used to configure the streams. | |
| The replication factor for change log topics and repartition topics created by the stream processing application. | |
| Security protocol used to communicate with brokers. | |
| Password of the private key in either key store key or key store file. | |
| Certificate chain in PEM format with a list of X.509 certificates. | |
| Private key in PEM format with PKCS#8 keys. | |
| Location of the key store file. | |
| Store password for the key store file. | |
| Type of the key store. | |
| SSL protocol to use. | |
| Trusted certificates in PEM format with X.509 certificates. | |
| Location of the trust store file. | |
| Store password for the trust store file. | |
| Type of the trust store. | |
| Directory location for the state store. | |
| Default topic to which messages are sent. | |
| Transaction id prefix, overrides the transaction id prefix in the producer factory. | |
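The Kafka rows above span the shared, consumer, producer, and listener namespaces. A sketch with illustrative broker addresses and a placeholder group id:

```properties
# Shared across admin, consumer, producer, and streams unless overridden
spring.kafka.bootstrap-servers=broker1:9092,broker2:9092

# Consumer settings
spring.kafka.consumer.group-id=order-service
spring.kafka.consumer.auto-offset-reset=earliest
spring.kafka.consumer.enable-auto-commit=false

# Producer settings
spring.kafka.producer.acks=all
spring.kafka.producer.retries=3

# Listener container
spring.kafka.listener.ack-mode=manual
spring.kafka.listener.concurrency=3
```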
| Mode used to shuffle configured addresses. |
|
| Comma-separated list of addresses to which the client should connect. When set, the host and port are ignored. | |
| Duration to wait to obtain a channel if the cache size has been reached. If 0, always create a new channel. | |
| Number of channels to retain in the cache. When "check-timeout" > 0, max channels per connection. | |
| Connection factory cache mode. |
|
| Number of connections to cache. Only applies when mode is CONNECTION. | |
| Continuation timeout for RPC calls in channels. Set it to zero to wait forever. |
|
| Connection timeout. Set it to zero to wait forever. | |
| Whether to create an AmqpAdmin bean. |
|
| RabbitMQ host. Ignored if an address is set. |
|
| Acknowledge mode of container. | |
| Whether to start the container automatically on startup. |
|
| Number of consumers per queue. | |
| Whether the container should present batched messages as discrete messages or call the listener with the batch. |
|
| Whether rejected deliveries are re-queued by default. | |
| How often idle container events should be published. | |
| Whether to fail if the queues declared by the container are not available on the broker. |
|
| Maximum number of unacknowledged messages that can be outstanding at each consumer. | |
| Whether publishing retries are enabled. |
|
| Duration between the first and second attempt to deliver a message. |
|
| Maximum number of attempts to deliver a message. |
|
| Maximum duration between attempts. |
|
| Multiplier to apply to the previous retry interval. |
|
| Whether retries are stateless or stateful. |
|
| Acknowledge mode of container. | |
| Whether to start the container automatically on startup. |
|
| Batch size, expressed as the number of physical messages, to be used by the container. | |
| Minimum number of listener invoker threads. | |
| Whether the container creates a batch of messages based on the 'receive-timeout' and 'batch-size'. Coerces 'de-batching-enabled' to true to include the contents of a producer created batch in the batch as discrete records. |
|
| Whether the container should present batched messages as discrete messages or call the listener with the batch. |
|
| Whether rejected deliveries are re-queued by default. | |
| How often idle container events should be published. | |
| Maximum number of listener invoker threads. | |
| Whether to fail if the queues declared by the container are not available on the broker and/or whether to stop the container if one or more queues are deleted at runtime. |
|
| Maximum number of unacknowledged messages that can be outstanding at each consumer. | |
| Whether publishing retries are enabled. |
|
| Duration between the first and second attempt to deliver a message. |
|
| Maximum number of attempts to deliver a message. |
|
| Maximum duration between attempts. |
|
| Multiplier to apply to the previous retry interval. |
|
| Whether retries are stateless or stateful. |
|
| Whether to start the container automatically on startup. |
|
| Whether the container will support listeners that consume native stream messages instead of Spring AMQP messages. |
|
| Listener container type. |
|
| Login to authenticate against the broker. |
|
| RabbitMQ port. Ignored if an address is set. Default to 5672, or 5671 if SSL is enabled. | |
| Type of publisher confirms to use. | |
| Whether to enable publisher returns. |
|
| Number of channels per connection requested by the client. Use 0 for unlimited. |
|
| Requested heartbeat timeout; zero for none. If a duration suffix is not specified, seconds will be used. | |
| SSL algorithm to use. By default, configured by the Rabbit client library. | |
| Whether to enable SSL support. Determined automatically if an address is provided with the protocol (amqp:// vs. amqps://). | |
| Path to the key store that holds the SSL certificate. | |
| Key store algorithm. |
|
| Password used to access the key store. | |
| Key store type. |
|
| Trust store that holds SSL certificates. | |
| Trust store algorithm. |
|
| Password used to access the trust store. | |
| Trust store type. |
|
| Whether to enable server side certificate validation. |
|
| Whether to enable hostname verification. |
|
| Host of a RabbitMQ instance with the Stream plugin enabled. |
|
| Name of the stream. | |
| Login password to authenticate to the broker. When not set, spring.rabbitmq.password is used. | |
| Stream port of a RabbitMQ instance with the Stream plugin enabled. | |
| Login user to authenticate to the broker. When not set, spring.rabbitmq.username is used. | |
| Name of the default queue to receive messages from when none is specified explicitly. | |
| Name of the default exchange to use for send operations. | |
| Whether to enable mandatory messages. | |
| Timeout for receive() operations. | |
| Timeout for sendAndReceive() operations. | |
| Whether publishing retries are enabled. |
|
| Duration between the first and second attempt to deliver a message. |
|
| Maximum number of attempts to deliver a message. |
|
| Maximum duration between attempts. |
|
| Multiplier to apply to the previous retry interval. |
|
| Value of a default routing key to use for send operations. | |
| Login user to authenticate to the broker. |
|
| Virtual host to use when connecting to the broker. | |
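The RabbitMQ rows above combine as in this sketch; host, credentials, and virtual host are placeholders:

```properties
spring.rabbitmq.host=rabbit.example.com
spring.rabbitmq.port=5672
spring.rabbitmq.username=guest
spring.rabbitmq.password=guest
spring.rabbitmq.virtual-host=/orders

# Simple listener container with retries enabled
spring.rabbitmq.listener.simple.acknowledge-mode=auto
spring.rabbitmq.listener.simple.concurrency=2
spring.rabbitmq.listener.simple.max-concurrency=8
spring.rabbitmq.listener.simple.retry.enabled=true
spring.rabbitmq.listener.simple.retry.max-attempts=5
spring.rabbitmq.listener.simple.retry.initial-interval=2s
```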
| Path that serves as the base URI for the services. |
|
| Servlet init parameters to pass to Spring Web Services. | |
| Load on startup priority of the Spring Web Services servlet. |
|
| Comma-separated list of locations of WSDLs and accompanying XSDs to be exposed as beans. |
A.9. Web Properties
| Name | Description | Default Value |
|---|---|---|
| Whether credentials are supported. When not set, credentials are not supported. | |
| Comma-separated list of HTTP headers to allow in a request. '*' allows all headers. | |
| Comma-separated list of HTTP methods to allow. '*' allows all methods. When not set, defaults to GET. | |
| Comma-separated list of origin patterns to allow. Unlike allowed origins which only support '*', origin patterns are more flexible, e.g. 'https://*.example.com', and can be used with allow-credentials. When neither allowed origins nor allowed origin patterns are set, cross-origin requests are effectively disabled. | |
| Comma-separated list of origins to allow with '*' allowing all origins. When allow-credentials is enabled, '*' cannot be used, and setting origin patterns should be considered instead. When neither allowed origins nor allowed origin patterns are set, cross-origin requests are effectively disabled. | |
| Comma-separated list of headers to include in a response. | |
| How long the response from a pre-flight request can be cached by clients. If a duration suffix is not specified, seconds will be used. |
|
| Whether the default GraphiQL UI is enabled. |
|
| Path to the GraphiQL UI endpoint. |
|
| Path at which to expose a GraphQL request HTTP endpoint. |
|
| Mapping of the RSocket message handler. | |
| File extensions for GraphQL schema files. |
|
| Whether field introspection should be enabled at the schema level. |
|
| Locations of GraphQL schema files. |
|
| Whether the endpoint that prints the schema is enabled. Schema is available under spring.graphql.path + "/schema". |
|
| Time within which the initial 'CONNECTION_INIT' type message must be received. |
|
| Path of the GraphQL WebSocket subscription endpoint. | |
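The GraphQL rows above (endpoints, GraphiQL, and CORS) combine as in this sketch; the paths and origin pattern are illustrative:

```properties
# HTTP and WebSocket endpoints
spring.graphql.path=/graphql
spring.graphql.websocket.path=/graphql-ws
spring.graphql.graphiql.enabled=true

# CORS for the GraphQL endpoint; origin patterns work with allow-credentials
spring.graphql.cors.allowed-origin-patterns=https://*.example.com
spring.graphql.cors.allow-credentials=true
```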
| Whether application/hal+json responses should be sent to requests that accept application/json. |
|
| Path that serves as the base URI for the application. If specified, overrides the value of "@ApplicationPath". | |
| Jersey filter chain order. |
|
| Init parameters to pass to Jersey through the servlet or filter. | |
| Load on startup priority of the Jersey servlet. |
|
| Jersey integration type. |
|
| Amount of time before asynchronous request handling times out. If this value is not set, the default timeout of the underlying implementation is used. | |
| Whether a request parameter ("format" by default) should be used to determine the requested media type. |
|
| Map file extensions to media types for content negotiation. For instance, yml to text/yaml. | |
| Query parameter name to use when "favor-parameter" is enabled. | |
| Preferred JSON mapper to use for HTTP message conversion. By default, auto-detected according to the environment. | |
| Whether to dispatch OPTIONS requests to the FrameworkServlet doService method. |
|
| Whether to dispatch TRACE requests to the FrameworkServlet doService method. |
|
| Date format to use, for example 'dd/MM/yyyy'. | |
| Date-time format to use, for example 'yyyy-MM-dd HH:mm:ss'. | |
| Time format to use, for example 'HH:mm:ss'. | |
| Whether to enable Spring's FormContentFilter. |
|
| Whether to enable Spring's HiddenHttpMethodFilter. |
|
| Whether the content of the "default" model should be ignored during redirect scenarios. |
|
| Whether logging of (potentially sensitive) request details at DEBUG and TRACE level is allowed. |
|
| Whether to enable warn logging of exceptions resolved by a "HandlerExceptionResolver", except for "DefaultHandlerExceptionResolver". |
|
| Formatting strategy for message codes. For instance, 'PREFIX_ERROR_CODE'. | |
| Choice of strategy for matching request paths against registered mappings. |
|
| Whether to publish a ServletRequestHandledEvent at the end of each request. |
|
| Load on startup priority of the dispatcher servlet. |
|
| Path of the dispatcher servlet. Setting a custom value for this property is not compatible with the PathPatternParser matching strategy. |
|
| Path pattern used for static resources. |
|
| Whether a "NoHandlerFoundException" should be thrown if no Handler was found to process a request. |
|
| Spring MVC view prefix. | |
| Spring MVC view suffix. | |
| Level of leak detection for reference-counted buffers. If not configured via 'ResourceLeakDetector.setLevel' or the 'io.netty.leakDetection.level' system property, defaults to 'simple'. | |
| Whether to enable support of multipart uploads. |
|
| Threshold after which files are written to disk. |
|
| Intermediate location of uploaded files. | |
| Max file size. |
|
| Max request size. |
|
| Whether to resolve the multipart request lazily at the time of file or parameter access. |
|
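The servlet multipart rows above bind to `spring.servlet.multipart.*`. A sketch with illustrative size limits:

```properties
spring.servlet.multipart.enabled=true
spring.servlet.multipart.max-file-size=10MB
spring.servlet.multipart.max-request-size=20MB
# Files above this threshold are written to disk rather than kept in memory
spring.servlet.multipart.file-size-threshold=2KB
spring.servlet.multipart.resolve-lazily=false
```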
| Sessions flush mode. Determines when session changes are written to the session store. |
|
| Name of the map used to store sessions. |
|
| Sessions save mode. Determines how session changes are tracked and saved to the session store. |
|
| Cron expression for expired session cleanup job. |
|
| Sessions flush mode. Determines when session changes are written to the session store. |
|
| Database schema initialization mode. |
|
| Platform to use in initialization scripts if the @@platform@@ placeholder is used. Auto-detected by default. | |
| Sessions save mode. Determines how session changes are tracked and saved to the session store. |
|
| Path to the SQL file to use to initialize the database schema. |
|
| Name of the database table used to store sessions. |
|
| Collection name used to store sessions. |
|
| Cron expression for expired session cleanup job. |
|
| The configure action to apply when no user defined ConfigureRedisAction bean is present. |
|
| Sessions flush mode. Determines when session changes are written to the session store. |
|
| Namespace for keys used to store sessions. |
|
| Sessions save mode. Determines how session changes are tracked and saved to the session store. |
|
| Session repository filter dispatcher types. |
|
| Session repository filter order. | |
| Session store type. | |
| Session timeout. If a duration suffix is not specified, seconds will be used. | |
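The Spring Session rows above combine as in this sketch for a Redis-backed store; the namespace and cron expression are illustrative:

```properties
spring.session.store-type=redis
spring.session.timeout=30m
spring.session.redis.namespace=spring:session
spring.session.redis.flush-mode=on_save
# Run the expired-session cleanup job every minute
spring.session.redis.cleanup-cron=0 * * * * *
```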
| Locale to use. By default, this locale is overridden by the "Accept-Language" header. | |
| Define how the locale should be resolved. |
|
| Whether to enable default resource handling. |
|
| Indicate that the response message is intended for a single user and must not be stored by a shared cache. | |
| Indicate that any cache may store the response. | |
| Maximum time the response should be cached, in seconds if a duration suffix is not specified. |
| Indicate that once it has become stale, a cache must not use the response without re-validating it with the server. | |
| Indicate that the cached response can be reused only if re-validated with the server. | |
| Indicate to not cache the response in any case. | |
| Indicate to intermediaries (caches and others) that they should not transform the response content. | |
| Same meaning as the "must-revalidate" directive, except that it does not apply to private caches. | |
| Maximum time the response should be cached by shared caches, in seconds if a duration suffix is not specified. | |
| Maximum time the response may be used when errors are encountered, in seconds if a duration suffix is not specified. | |
| Maximum time the response can be served after it becomes stale, in seconds if a duration suffix is not specified. | |
| Cache period for the resources served by the resource handler. If a duration suffix is not specified, seconds will be used. Can be overridden by the 'spring.web.resources.cache.cachecontrol' properties. | |
| Whether we should use the "lastModified" metadata of the files in HTTP caching headers. |
|
| Whether to enable caching in the Resource chain. |
|
| Whether to enable resolution of already compressed resources (gzip, brotli). Checks for a resource name with the '.gz' or '.br' file extensions. |
|
| Whether to enable the Spring Resource Handling chain. By default, disabled unless at least one strategy has been enabled. | |
| Whether to enable the content Version Strategy. |
|
| Comma-separated list of patterns to apply to the content Version Strategy. |
|
| Whether to enable the fixed Version Strategy. |
|
| Comma-separated list of patterns to apply to the fixed Version Strategy. |
|
| Version string to use for the fixed Version Strategy. | |
| Locations of static resources. Defaults to classpath:[/META-INF/resources/, /resources/, /static/, /public/]. |
|
| Base path for all web handlers. | |
| Date format to use, for example 'dd/MM/yyyy'. | |
| Date-time format to use, for example 'yyyy-MM-dd HH:mm:ss'. | |
| Time format to use, for example 'HH:mm:ss'. | |
| Whether to enable Spring's HiddenHttpMethodFilter. |
|
| Directory used to store file parts larger than 'maxInMemorySize'. Default is a directory named 'spring-multipart' created under the system temporary directory. Ignored when streaming is enabled. | |
| Character set used to decode headers. |
|
| Maximum amount of disk space allowed per part. Default is -1 which enforces no limits. Ignored when streaming is enabled. |
|
| Maximum amount of memory allowed per headers section of each part. Set to -1 to enforce no limits. |
|
| Maximum amount of memory allowed per part before it's written to disk. Set to -1 to store all contents in memory. Ignored when streaming is enabled. |
|
| Maximum number of parts allowed in a given multipart request. Default is -1 which enforces no limits. |
|
| Whether to stream directly from the parsed input buffer stream without storing in memory nor file. Default is non-streaming. |
|
| Path pattern used for static resources. |
|
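Several of the static-resource settings above can be combined for cache busting in production. A hedged sketch, with placeholder values:

```properties
# Illustrative static-resource settings; values are placeholders.
spring.web.resources.static-locations=classpath:/static/,classpath:/public/
# Cache period for resources served by the resource handler.
spring.web.resources.cache.cachecontrol.max-age=365d
# Enable the content Version Strategy so URLs change when content changes.
spring.web.resources.chain.strategy.content.enabled=true
spring.web.resources.chain.strategy.content.paths=/**
# Path pattern used for static resources.
spring.mvc.static-path-pattern=/resources/**
```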
A.10. Templating Properties
Name | Description | Default Value |
---|---|---|
| Whether HttpServletRequest attributes are allowed to override (hide) controller generated model attributes of the same name. |
|
| Whether HttpSession attributes are allowed to override (hide) controller generated model attributes of the same name. |
|
| Whether to enable template caching. |
|
| Template encoding. |
|
| Whether to check that the templates location exists. |
|
| Content-Type value. |
|
| Whether to enable MVC view resolution for this technology. |
|
| Whether all request attributes should be added to the model prior to merging with the template. |
|
| Whether all HttpSession attributes should be added to the model prior to merging with the template. |
|
| Whether to expose a RequestContext for use by Spring's macro library, under the name "springMacroRequestContext". |
|
| Whether to prefer file system access for template loading to enable hot detection of template changes. When a template path is detected as a directory, templates are loaded from the directory only and other matching classpath locations will not be considered. |
|
| Prefix that gets prepended to view names when building a URL. | |
| Name of the RequestContext attribute for all views. | |
| Well-known FreeMarker keys which are passed to FreeMarker's Configuration. | |
| Suffix that gets appended to view names when building a URL. |
|
| Comma-separated list of template paths. |
|
| View names that can be resolved. | |
| Whether HttpServletRequest attributes are allowed to override (hide) controller generated model attributes of the same name. |
|
| Whether HttpSession attributes are allowed to override (hide) controller generated model attributes of the same name. |
|
| Whether to enable template caching. |
|
| Template encoding. |
|
| Whether to check that the templates location exists. |
|
| See GroovyMarkupConfigurer | |
| Content-Type value. |
|
| Whether to enable MVC view resolution for this technology. |
|
| Whether all request attributes should be added to the model prior to merging with the template. |
|
| Whether all HttpSession attributes should be added to the model prior to merging with the template. |
|
| Whether to expose a RequestContext for use by Spring's macro library, under the name "springMacroRequestContext". |
|
| Prefix that gets prepended to view names when building a URL. | |
| Name of the RequestContext attribute for all views. | |
| Template path. |
|
| Suffix that gets appended to view names when building a URL. |
|
| View names that can be resolved. | |
| Template encoding. |
|
| Whether to check that the templates location exists. |
|
| Whether to enable MVC view resolution for Mustache. |
|
| Prefix to apply to template names. |
|
| Media types supported by Mustache views. |
|
| Name of the RequestContext attribute for all views. | |
| Whether HttpServletRequest attributes are allowed to override (hide) controller generated model attributes of the same name. |
|
| Whether HttpSession attributes are allowed to override (hide) controller generated model attributes of the same name. |
|
| Whether to enable template caching. |
|
| Content-Type value. | |
| Whether all request attributes should be added to the model prior to merging with the template. |
|
| Whether all HttpSession attributes should be added to the model prior to merging with the template. |
|
| Whether to expose a RequestContext for use by Spring's macro library, under the name "springMacroRequestContext". |
|
| Suffix to apply to template names. |
|
| View names that can be resolved. | |
| Whether to enable template caching. |
|
| Whether to check that the template exists before rendering it. |
|
| Whether to check that the templates location exists. |
|
| Whether to enable the SpringEL compiler in SpringEL expressions. |
|
| Whether to enable Thymeleaf view resolution for Web frameworks. |
|
| Template files encoding. |
|
| Comma-separated list of view names (patterns allowed) that should be excluded from resolution. | |
| Template mode to be applied to templates. See also Thymeleaf's TemplateMode enum. |
|
| Prefix that gets prepended to view names when building a URL. |
|
| Comma-separated list of view names (patterns allowed) that should be the only ones executed in CHUNKED mode when a max chunk size is set. | |
| Comma-separated list of view names (patterns allowed) that should be executed in FULL mode even if a max chunk size is set. | |
| Maximum size of data buffers used for writing to the response. Templates will execute in CHUNKED mode by default if this is set. |
|
| Media types supported by the view technology. |
|
| Whether hidden form inputs acting as markers for checkboxes should be rendered before the checkbox element itself. |
|
| Content-Type value written to HTTP responses. |
|
| Whether Thymeleaf should start writing partial output as soon as possible or buffer until template processing is finished. |
|
| Suffix that gets appended to view names when building a URL. |
|
| Order of the template resolver in the chain. By default, the template resolver is first in the chain. Order starts at 1 and should only be set if you have defined additional "TemplateResolver" beans. | |
| Comma-separated list of view names (patterns allowed) that can be resolved. |
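The Thymeleaf settings above are commonly adjusted for local development. A minimal sketch, assuming default template locations; values are illustrative:

```properties
# Illustrative Thymeleaf settings; values are placeholders.
# Disable template caching during development so edits are picked up.
spring.thymeleaf.cache=false
# Prefix and suffix used when building a template URL from a view name.
spring.thymeleaf.prefix=classpath:/templates/
spring.thymeleaf.suffix=.html
# Template files encoding and template mode.
spring.thymeleaf.encoding=UTF-8
spring.thymeleaf.mode=HTML
```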
A.11. Server Properties
Name | Description | Default Value |
---|---|---|
| Network address to which the server should bind. | |
| Whether response compression is enabled. |
|
| Comma-separated list of user agents for which responses should not be compressed. | |
| Comma-separated list of MIME types that should be compressed. |
|
| Minimum "Content-Length" value that is required for compression to be performed. |
|
| When to include "errors" attribute. |
|
| Include the "exception" attribute. |
|
| When to include "message" attribute. |
|
| When to include the "trace" attribute. |
|
| Path of the error controller. |
|
| Whether to enable the default error page displayed in browsers in case of a server error. |
|
| Strategy for handling X-Forwarded-* headers. | |
| Whether to enable HTTP/2 support, if the current environment supports it. |
|
| Append to log. |
|
| Custom log format, see org.eclipse.jetty.server.CustomRequestLog. If defined, overrides the "format" configuration key. | |
| Enable access log. |
|
| Date format to place in log file name. | |
| Log filename. If not specified, logs redirect to "System.err". | |
| Log format. |
|
| Request paths that should not be logged. | |
| Number of days before rotated log files are deleted. |
|
| Time that the connection can be idle before it is closed. | |
| Maximum size of the form content in any HTTP post request. |
|
| Number of acceptor threads to use. When the value is -1, the default, the number of acceptors is derived from the operating environment. |
|
| Maximum thread idle time. |
|
| Maximum number of threads. |
|
| Maximum capacity of the thread pool's backing queue. A default is computed based on the threading configuration. | |
| Minimum number of threads. |
|
| Number of selector threads to use. When the value is -1, the default, the number of selectors is derived from the operating environment. |
|
| Maximum size of the HTTP message header. |
|
| Connection timeout of the Netty channel. | |
| Maximum content length of an H2C upgrade request. |
|
| Idle timeout of the Netty channel. When not specified, an infinite timeout is used. | |
| Initial buffer size for HTTP request decoding. |
|
| Maximum chunk size that can be decoded for an HTTP request. |
|
| Maximum length that can be decoded for an HTTP request's initial line. |
|
| Maximum number of requests that can be made per connection. By default, a connection serves an unlimited number of requests. | |
| Whether to validate headers when decoding requests. |
|
| Server HTTP port. |
|
| Domain for the cookie. | |
| Whether to use "HttpOnly" cookies for the cookie. | |
| Maximum age of the cookie. If a duration suffix is not specified, seconds will be used. A positive value indicates when the cookie expires relative to the current time. A value of 0 means the cookie should expire immediately. A negative value means no "Max-Age". | |
| Name for the cookie. | |
| Path of the cookie. | |
| SameSite setting for the cookie. | |
| Whether to always mark the cookie as secure. | |
| Session timeout. If a duration suffix is not specified, seconds will be used. |
|
| Value to use for the Server response header (if empty, no header is sent). | |
| Display name of the application. |
|
| Servlet context init parameters. | |
| Context path of the application. | |
| Charset of HTTP requests and responses. Added to the "Content-Type" header if not set explicitly. |
|
| Whether to enable http encoding support. |
|
| Whether to force the encoding to the configured charset on HTTP requests and responses. | |
| Whether to force the encoding to the configured charset on HTTP requests. Defaults to true when "force" has not been specified. | |
| Whether to force the encoding to the configured charset on HTTP responses. | |
| Mapping of locale to charset for response encoding. | |
| Class name of the servlet to use for JSPs. If registered is true and this class is on the classpath, then it will be registered. |
|
| Init parameters used to configure the JSP servlet. | |
| Whether the JSP servlet is registered. |
|
| Whether to register the default Servlet with the container. |
|
| Comment for the cookie. | |
| Domain for the cookie. | |
| Whether to use "HttpOnly" cookies for the cookie. | |
| Maximum age of the cookie. If a duration suffix is not specified, seconds will be used. A positive value indicates when the cookie expires relative to the current time. A value of 0 means the cookie should expire immediately. A negative value means no "Max-Age". | |
| Name of the cookie. | |
| Path of the cookie. | |
| SameSite setting for the cookie. | |
| Whether to always mark the cookie as secure. | |
| Whether to persist session data between restarts. |
|
| Directory used to store session data. | |
| Session timeout. If a duration suffix is not specified, seconds will be used. |
|
| Session tracking modes. | |
| Type of shutdown that the server will support. |
|
| Path to a PEM-encoded SSL certificate file. | |
| Path to a PEM-encoded private key file for the SSL certificate. | |
| Supported SSL ciphers. | |
| Client authentication mode. Requires a trust store. | |
| Whether to enable SSL support. |
|
| Enabled SSL protocols. | |
| Alias that identifies the key in the key store. | |
| Password used to access the key in the key store. | |
| Path to the key store that holds the SSL certificate (typically a jks file). | |
| Password used to access the key store. | |
| Provider for the key store. | |
| Type of the key store. | |
| SSL protocol to use. |
|
| Path to a PEM-encoded SSL certificate authority file. | |
| Path to a PEM-encoded private key file for the SSL certificate authority. | |
| Trust store that holds SSL certificates. | |
| Password used to access the trust store. | |
| Provider for the trust store. | |
| Type of the trust store. | |
| Maximum queue length for incoming connection requests when all possible request processing threads are in use. |
|
| Whether to buffer output such that it is flushed only periodically. |
|
| Whether to check for log file existence so it can be recreated if an external process has renamed it. |
|
| Whether logging of the request will only be enabled if "ServletRequest.getAttribute(conditionIf)" does not yield null. | |
| Whether logging of the request will only be enabled if "ServletRequest.getAttribute(conditionUnless)" yields null. | |
| Directory in which log files are created. Can be absolute or relative to the Tomcat base dir. |
|
| Enable access log. |
|
| Character set used by the log file. Defaults to the system default character set. | |
| Date format to place in the log file name. |
|
| Whether to use IPv6 canonical representation format as defined by RFC 5952. |
|
| Locale used to format timestamps in log entries and in log file name suffix. Defaults to the default locale of the Java process. | |
| Number of days to retain the access log files before they are removed. |
|
| Format pattern for access logs. |
|
| Log file name prefix. |
|
| Whether to defer inclusion of the date stamp in the file name until rotate time. |
|
| Set request attributes for the IP address, hostname, protocol, and port used for the request. |
|
| Whether to enable access log rotation. |
|
| Log file name suffix. |
|
| Comma-separated list of additional patterns that match jars to ignore for TLD scanning. The special '?' and '*' characters can be used in the pattern to match one and only one character and zero or more characters respectively. | |
| Delay between the invocation of backgroundProcess methods. If a duration suffix is not specified, seconds will be used. |
|
| Tomcat base directory. If not specified, a temporary directory is used. | |
| Amount of time the connector will wait, after accepting a connection, for the request URI line to be presented. | |
| Time to wait for another HTTP request before the connection is closed. When not set the connectionTimeout is used. When set to -1 there will be no timeout. | |
| Maximum number of connections that the server accepts and processes at any given time. Once the limit has been reached, the operating system may still accept connections based on the "acceptCount" property. |
|
| Maximum size of the form content in any HTTP post request. |
|
| Maximum number of HTTP requests that can be pipelined before the connection is closed. When set to 0 or 1, keep-alive and pipelining are disabled. When set to -1, an unlimited number of pipelined or keep-alive requests are allowed. |
|
| Maximum amount of request body to swallow. |
|
| Whether Tomcat's MBean Registry should be enabled. |
|
| Maximum number of idle processors that will be retained in the cache and reused with a subsequent request. When set to -1 the cache will be unlimited with a theoretical maximum size equal to the maximum number of connections. |
|
| Whether requests to the context root should be redirected by appending a / to the path. When using SSL terminated at a proxy, this property should be set to false. |
|
| Whether to reject requests with illegal header names or values. |
|
| Comma-separated list of additional unencoded characters that should be allowed in URI paths. Only "< > [ \ ] ^ ` { | }" are allowed. | |
| Comma-separated list of additional unencoded characters that should be allowed in URI query strings. Only "< > [ \ ] ^ ` { | }" are allowed. | |
| Name of the HTTP header from which the remote host is extracted. |
|
| Regular expression that matches proxies that are to be trusted. |
|
| Name of the HTTP header used to override the original port value. |
|
| Header that holds the incoming protocol, usually named "X-Forwarded-Proto". | |
| Value of the protocol header indicating whether the incoming request uses SSL. |
|
| Name of the HTTP header from which the remote IP is extracted. For instance, 'X-FORWARDED-FOR'. | |
| Whether static resource caching is permitted for this web application. |
|
| Time-to-live of the static resource cache. | |
| Maximum number of worker threads. |
|
| Minimum number of worker threads. |
|
| Character encoding to use to decode the URI. |
|
| Whether HTTP 1.1 and later location headers generated by a call to sendRedirect will use relative or absolute redirects. |
|
| Undertow access log directory. | |
| Whether to enable the access log. |
|
| Format pattern for access logs. |
|
| Log file name prefix. |
|
| Whether to enable access log rotation. |
|
| Log file name suffix. |
|
| Whether the server should decode percent encoded slash characters. Enabling encoded slashes can have security implications due to different servers interpreting the slash differently. Only enable this if you have a legacy application that requires it. |
|
| Whether the 'Connection: keep-alive' header should be added to all responses, even if not required by the HTTP specification. |
|
| Size of each buffer. The default is derived from the maximum amount of memory that is available to the JVM. | |
| Whether the URL should be decoded. When disabled, percent-encoded characters in the URL will be left as-is. |
|
| Whether to allocate buffers outside the Java heap. The default is derived from the maximum amount of memory that is available to the JVM. | |
| Whether servlet filters should be initialized on startup. |
|
| Maximum number of cookies that are allowed. This limit exists to prevent hash collision based DoS attacks. |
|
| Maximum number of headers that are allowed. This limit exists to prevent hash collision based DoS attacks. | |
| Maximum size of the HTTP post content. When the value is -1, the default, the size is unlimited. |
|
| Maximum number of query or path parameters that are allowed. This limit exists to prevent hash collision based DoS attacks. | |
| Amount of time a connection can sit idle without processing a request, before it is closed by the server. | |
| Server options as defined in io.undertow.UndertowOptions. | |
| Socket options as defined in org.xnio.Options. | |
| Whether to preserve the path of a request when it is forwarded. |
|
| Number of I/O threads to create for the worker. The default is derived from the number of available processors. | |
| Number of worker threads. The default is 8 times the number of I/O threads. | |
| Charset used to decode URLs. |
|
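A handful of the server properties above are often set together. The sketch below uses placeholder values throughout (the port, keystore path, and password in particular are examples, not recommendations):

```properties
# Illustrative server settings; all values are placeholders.
server.port=8443
# Response compression for payloads above the minimum size.
server.compression.enabled=true
server.compression.min-response-size=1024
# SSL via a Java keystore; replace path and password with your own.
server.ssl.key-store=classpath:keystore.jks
server.ssl.key-store-password=changeit
# Servlet session timeout; a bare number is interpreted as seconds.
server.servlet.session.timeout=30m
# Tomcat access log.
server.tomcat.accesslog.enabled=true
```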
A.14. Actuator Properties
Name | Description | Default Value |
---|---|---|
| Whether to enable storage of audit events. |
|
| Whether to enable extended Cloud Foundry actuator endpoints. |
|
| Whether to skip SSL verification for Cloud Foundry actuator endpoint security calls. |
|
| Maximum time that a response can be cached. |
|
| Whether to enable the auditevents endpoint. |
|
| Maximum time that a response can be cached. |
|
| Whether to enable the beans endpoint. |
|
| Maximum time that a response can be cached. |
|
| Whether to enable the caches endpoint. |
|
| Maximum time that a response can be cached. |
|
| Whether to enable the conditions endpoint. |
|
| Keys that should be sanitized in addition to those already configured. Keys can be simple strings that the property ends with or regular expressions. | |
| Maximum time that a response can be cached. |
|
| Whether to enable the configprops endpoint. |
|
| Keys that should be sanitized. Keys can be simple strings that the property ends with or regular expressions. |
|
| Keys that should be sanitized in addition to those already configured. Keys can be simple strings that the property ends with or regular expressions. | |
| Maximum time that a response can be cached. |
|
| Whether to enable the env endpoint. |
|
| Keys that should be sanitized. Keys can be simple strings that the property ends with or regular expressions. |
|
| Maximum time that a response can be cached. |
|
| Whether to enable the flyway endpoint. |
|
| Maximum time that a response can be cached. |
|
| Whether to enable the health endpoint. |
|
| Health endpoint groups. | |
| Threshold after which a warning will be logged for slow health indicators. |
|
| Whether to make the liveness and readiness health groups available on the main server port. |
|
| Whether to enable liveness and readiness probes. |
|
| Roles used to determine whether a user is authorized to be shown details. When empty, all authenticated users are authorized. | |
| When to show components. If not specified, the 'show-details' setting will be used. | |
| When to show full health details. |
|
| Mapping of health statuses to HTTP status codes. By default, registered health statuses map to sensible defaults (for example, UP maps to 200). | |
| Comma-separated list of health statuses in order of severity. |
|
| Maximum time that a response can be cached. |
|
| Whether to enable the heapdump endpoint. |
|
| Maximum time that a response can be cached. |
|
| Whether to enable the httptrace endpoint. |
|
| Maximum time that a response can be cached. |
|
| Whether to enable the info endpoint. |
|
| Maximum time that a response can be cached. |
|
| Whether to enable the integrationgraph endpoint. |
|
| Jolokia settings. Refer to the documentation of Jolokia for more details. | |
| Whether to enable the jolokia endpoint. |
|
| Maximum time that a response can be cached. |
|
| Whether to enable the liquibase endpoint. |
|
| Maximum time that a response can be cached. |
|
| Whether to enable the logfile endpoint. |
|
| External log file to be accessed. Can be used if the log file is written by output redirect and not by the logging system itself. | |
| Maximum time that a response can be cached. |
|
| Whether to enable the loggers endpoint. |
|
| Maximum time that a response can be cached. |
|
| Whether to enable the mappings endpoint. |
|
| Maximum time that a response can be cached. |
|
| Whether to enable the metrics endpoint. |
|
| Whether to enable the prometheus endpoint. |
|
| Maximum time that a response can be cached. |
|
| Whether to enable the quartz endpoint. |
|
| Maximum time that a response can be cached. |
|
| Whether to enable the scheduledtasks endpoint. |
|
| Whether to enable the sessions endpoint. |
|
| Whether to enable the shutdown endpoint. |
|
| Maximum time that a response can be cached. |
|
| Whether to enable the startup endpoint. |
|
| Maximum time that a response can be cached. |
|
| Whether to enable the threaddump endpoint. |
|
| Whether to enable or disable all endpoints by default. | |
| Endpoints JMX domain name. Falls back to 'spring.jmx.default-domain' if set. |
|
| Endpoint IDs that should be excluded or '*' for all. | |
| Endpoint IDs that should be included or '*' for all. |
|
| Additional static properties to append to all ObjectNames of MBeans representing Endpoints. | |
| Whether to transparently migrate legacy endpoint IDs. |
|
| Base path for Web endpoints. Relative to the servlet context path (server.servlet.context-path) or WebFlux base path (spring.webflux.base-path) when the management server is sharing the main server port. Relative to the management server base path (management.server.base-path) when a separate management server port (management.server.port) is configured. |
|
| Whether credentials are supported. When not set, credentials are not supported. | |
| Comma-separated list of headers to allow in a request. '*' allows all headers. | |
| Comma-separated list of methods to allow. '*' allows all methods. When not set, defaults to GET. | |
| Comma-separated list of origin patterns to allow. Unlike allowed origins which only supports '*', origin patterns are more flexible (for example 'https://*.example.com') and can be used when credentials are allowed. When no allowed origin patterns or allowed origins are set, CORS support is disabled. | |
| Comma-separated list of origins to allow. '*' allows all origins. When credentials are allowed, '*' cannot be used and origin patterns should be configured instead. When no allowed origins or allowed origin patterns are set, CORS support is disabled. | |
| Comma-separated list of headers to include in a response. | |
| How long the response from a pre-flight request can be cached by clients. If a duration suffix is not specified, seconds will be used. |
|
| Whether the discovery page is enabled. |
|
| Endpoint IDs that should be excluded or '*' for all. | |
| Endpoint IDs that should be included or '*' for all. |
|
| Mapping between endpoint IDs and the path that should expose them. | |
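The endpoint exposure and CORS settings described above are typically configured together. A hedged sketch; the endpoint list and allowed origin are placeholders:

```properties
# Illustrative Actuator exposure settings; values are placeholders.
# Endpoint IDs exposed over the web, or '*' for all.
management.endpoints.web.exposure.include=health,info,metrics
# Base path for web endpoints.
management.endpoints.web.base-path=/actuator
# When to show full health details.
management.endpoint.health.show-details=when_authorized
# CORS for actuator endpoints; when unset, CORS support is disabled.
management.endpoints.web.cors.allowed-origins=https://example.com
management.endpoints.web.cors.allowed-methods=GET,POST
```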
| Whether to enable Cassandra health check. |
|
| Whether to enable Couchbase health check. |
|
| Whether to enable database health check. |
|
| Whether to ignore AbstractRoutingDataSources when creating database health indicators. |
|
| Whether to enable default health indicators. |
|
| Whether to enable disk space health check. |
|
| Path used to compute the available disk space. | |
| Minimum disk space that should be available. |
|
| Whether to enable Elasticsearch health check. |
|
| Whether to enable InfluxDB health check. |
|
| Whether to enable JMS health check. |
|
| Whether to enable LDAP health check. |
|
| Whether to enable liveness state health check. |
|
| Whether to enable Mail health check. |
|
| Whether to enable MongoDB health check. |
|
| Whether to enable Neo4j health check. |
|
| Whether to enable ping health check. |
|
| Whether to enable RabbitMQ health check. |
|
| Whether to enable readiness state health check. |
|
| Whether to enable Redis health check. |
|
| Whether to enable Solr health check. |
|
| Whether to enable build info. |
|
| Whether to enable default info contributors. |
|
| Whether to enable environment info. |
|
| Whether to enable git info. |
|
| Mode to use to expose git information. |
|
| Whether to enable Java info. |
|
| Whether to enable Operating System info. |
|
| Whether to enable auto-timing. |
|
| Percentiles for which additional time series should be published. | |
| Whether to publish percentile histograms. |
|
| Name of the metric for sent requests. |
|
| Number of histograms for meter IDs starting with the specified name to keep in the ring buffer. The longest match wins, the key 'all' can also be used to configure all meters. | |
| Maximum amount of time that samples for meter IDs starting with the specified name are accumulated to decaying distribution statistics before they are reset and rotated. The longest match wins, the key 'all' can also be used to configure all meters. | |
| Maximum value that meter IDs starting with the specified name are expected to observe. The longest match wins. Values can be specified as a double or as a Duration value (for timer meters, defaulting to ms if no unit specified). | |
| Minimum value that meter IDs starting with the specified name are expected to observe. The longest match wins. Values can be specified as a double or as a Duration value (for timer meters, defaulting to ms if no unit specified). | |
| Specific computed non-aggregable percentiles to ship to the backend for meter IDs starting with the specified name. The longest match wins, the key 'all' can also be used to configure all meters. | |
| Whether meter IDs starting with the specified name should publish percentile histograms. For monitoring systems that support aggregable percentile calculation based on a histogram, this can be set to true. For other systems, this has no effect. The longest match wins, the key 'all' can also be used to configure all meters. | |
| Specific service-level objective boundaries for meter IDs starting with the specified name. The longest match wins. Counters will be published for each specified boundary. Values can be specified as a double or as a Duration value (for timer meters, defaulting to ms if no unit specified). | |
| Whether meter IDs starting with the specified name should be enabled. The longest match wins, the key 'all' can also be used to configure all meters. | |
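The per-meter distribution settings above key on a meter-name prefix. In the sketch below, 'http.server.requests' is the prefix being configured and all values are illustrative:

```properties
# Illustrative per-meter distribution settings; values are placeholders.
# Publish a percentile histogram for request timings.
management.metrics.distribution.percentiles-histogram.http.server.requests=true
# Ship specific non-aggregable percentiles to the backend.
management.metrics.distribution.percentiles.http.server.requests=0.95,0.99
# Service-level objective boundaries; unitless duration values default to ms.
management.metrics.distribution.slo.http.server.requests=100ms,500ms
# Disable all meters whose IDs start with the given prefix.
management.metrics.enable.jvm.gc=false
```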
| AppOptics API token. | |
| Number of measurements per request to use for this backend. If more measurements are found, then multiple requests will be made. |
|
| Connection timeout for requests to this backend. |
|
| Whether exporting of metrics to this backend is enabled. |
|
| Whether to ship a floored time, useful when sending measurements from multiple hosts to align them on a given time boundary. |
|
| Tag that will be mapped to "@host" when shipping metrics to AppOptics. |
|
| Read timeout for requests to this backend. |
|
| Step size (i.e. reporting frequency) to use. |
|
| URI to ship metrics to. |
|
| Number of measurements per request to use for this backend. If more measurements are found, then multiple requests will be made. |
|
| Frequency for refreshing config settings from the LWC service. |
|
| Time to live for subscriptions from the LWC service. |
|
| URI for the Atlas LWC endpoint to retrieve current subscriptions. |
|
| Connection timeout for requests to this backend. |
|
| Whether exporting of metrics to this backend is enabled. |
|
| URI for the Atlas LWC endpoint to evaluate the data for a subscription. |
|
| Whether to enable streaming to Atlas LWC. |
|
| Time to live for meters that do not have any activity. After this period the meter will be considered expired and will not get reported. |
|
| Number of threads to use with the metrics publishing scheduler. |
|
| Read timeout for requests to this backend. |
|
| Step size (i.e. reporting frequency) to use. |
|
| URI of the Atlas server. |
|
| Datadog API key. | |
| Datadog application key. Not strictly required, but improves the Datadog experience by sending meter descriptions, types, and base units to Datadog. | |
| Number of measurements per request to use for this backend. If more measurements are found, then multiple requests will be made. |
|
| Connection timeout for requests to this backend. |
|
| Whether to publish descriptions metadata to Datadog. Turn this off to minimize the amount of metadata sent. |
|
| Whether exporting of metrics to this backend is enabled. |
|
| Tag that will be mapped to "host" when shipping metrics to Datadog. |
|
| Read timeout for requests to this backend. |
|
| Step size (i.e. reporting frequency) to use. |
|
| URI to ship metrics to. Set this if you need to publish metrics to a Datadog site other than US, or to an internal proxy en route to Datadog. |
|
| Whether to enable default metrics exporters. |
|
| Dynatrace authentication token. | |
| Number of measurements per request to use for this backend. If more measurements are found, then multiple requests will be made. |
|
| Connection timeout for requests to this backend. |
|
| Whether exporting of metrics to this backend is enabled. |
|
| Read timeout for requests to this backend. |
|
| Step size (i.e. reporting frequency) to use. |
|
| URI to ship metrics to. Should be used for SaaS, self-managed instances or to en-route through an internal proxy. | |
| ID of the custom device that is exporting metrics to Dynatrace. | |
| Group for exported metrics. Used to specify custom device group name in the Dynatrace UI. | |
| Technology type for exported metrics. Used to group metrics under a logical technology name in the Dynatrace UI. |
|
| Default dimensions that are added to all metrics in the form of key-value pairs. These are overwritten by Micrometer tags if they use the same key. | |
| Whether to enable Dynatrace metadata export. |
|
| Prefix string that is added to all exported metrics. | |
| Whether to fall back to the built-in micrometer instruments for Timer and DistributionSummary. |
|
| Base64-encoded credentials string. Mutually exclusive with user-name and password. | |
| Whether to create the index automatically if it does not exist. |
|
| Number of measurements per request to use for this backend. If more measurements are found, then multiple requests will be made. |
|
| Connection timeout for requests to this backend. |
|
| Whether exporting of metrics to this backend is enabled. |
|
| Host to export metrics to. |
|
| Index to export metrics to. |
|
| Index date format used for rolling indices. Appended to the index name. |
|
| Prefix to separate the index name from the date format used for rolling indices. |
|
| Login password of the Elastic server. Mutually exclusive with api-key-credentials. | |
| Ingest pipeline name. By default, events are not pre-processed. | |
| Read timeout for requests to this backend. |
|
| Step size (i.e. reporting frequency) to use. |
|
| Name of the timestamp field. |
|
| Login user of the Elastic server. Mutually exclusive with api-key-credentials. | |
| UDP addressing mode, either unicast or multicast. |
|
| Base time unit used to report durations. |
|
| Whether exporting of metrics to Ganglia is enabled. |
|
| Host of the Ganglia server to receive exported metrics. |
|
| Port of the Ganglia server to receive exported metrics. |
|
| Step size (i.e. reporting frequency) to use. |
|
| Time to live for metrics on Ganglia. Set the multicast Time-To-Live to be one greater than the number of hops (routers) between the hosts. |
|
| Base time unit used to report durations. |
|
| Whether exporting of metrics to Graphite is enabled. |
|
| Whether Graphite tags should be used, as opposed to a hierarchical naming convention. Enabled by default unless "tagsAsPrefix" is set. | |
| Host of the Graphite server to receive exported metrics. |
|
| Port of the Graphite server to receive exported metrics. |
|
| Protocol to use while shipping data to Graphite. |
|
| Base time unit used to report rates. |
|
| Step size (i.e. reporting frequency) to use. |
|
| For the hierarchical naming convention, turn the specified tag keys into part of the metric prefix. Ignored if "graphiteTagsEnabled" is true. |
|
| Humio API token. | |
| Number of measurements per request to use for this backend. If more measurements are found, then multiple requests will be made. |
|
| Connection timeout for requests to this backend. |
|
| Whether exporting of metrics to this backend is enabled. |
|
| Read timeout for requests to this backend. |
|
| Step size (i.e. reporting frequency) to use. |
|
| Humio tags describing the data source in which metrics will be stored. Humio tags are a distinct concept from Micrometer's tags. Micrometer's tags are used to divide metrics along dimensional boundaries. | |
| URI to ship metrics to. If you need to publish metrics to an internal proxy en-route to Humio, you can define the location of the proxy with this. |
|
| API version of InfluxDB to use. Defaults to 'v1' unless an org is configured. If an org is configured, defaults to 'v2'. | |
| Whether to create the Influx database if it does not exist before attempting to publish metrics to it. InfluxDB v1 only. |
|
| Number of measurements per request to use for this backend. If more measurements are found, then multiple requests will be made. |
|
| Bucket for metrics. Use either the bucket name or ID. Defaults to the value of the db property if not set. InfluxDB v2 only. | |
| Whether to enable GZIP compression of metrics batches published to Influx. |
|
| Connection timeout for requests to this backend. |
|
| Write consistency for each point. |
|
| Database to send metrics to. InfluxDB v1 only. |
|
| Whether exporting of metrics to this backend is enabled. |
|
| Org to write metrics to. InfluxDB v2 only. | |
| Login password of the Influx server. InfluxDB v1 only. | |
| Read timeout for requests to this backend. |
|
| Time period for which Influx should retain data in the current database. For instance 7d, check the influx documentation for more details on the duration format. InfluxDB v1 only. | |
| Retention policy to use (Influx writes to the DEFAULT retention policy if one is not specified). InfluxDB v1 only. | |
| How many copies of the data are stored in the cluster. Must be 1 for a single node instance. InfluxDB v1 only. | |
| Time range covered by a shard group. For instance 2w, check the influx documentation for more details on the duration format. InfluxDB v1 only. | |
| Step size (i.e. reporting frequency) to use. |
|
| Authentication token to use with calls to the InfluxDB backend. For InfluxDB v1, the Bearer scheme is used. For v2, the Token scheme is used. | |
| URI of the Influx server. |
|
| Login user of the Influx server. InfluxDB v1 only. | |
| Metrics JMX domain name. |
|
| Whether exporting of metrics to this backend is enabled. |
|
| Step size (i.e. reporting frequency) to use. |
|
| Number of measurements per request to use for this backend. If more measurements are found, then multiple requests will be made. |
|
| Connection timeout for requests to this backend. |
|
| Whether exporting of metrics to this backend is enabled. |
|
| Login password of the KairosDB server. | |
| Read timeout for requests to this backend. |
|
| Step size (i.e. reporting frequency) to use. |
|
| URI of the KairosDB server. |
|
| Login user of the KairosDB server. | |
| New Relic account ID. | |
| New Relic API key. | |
| Number of measurements per request to use for this backend. If more measurements are found, then multiple requests will be made. |
|
| Client provider type to use. | |
| Connection timeout for requests to this backend. |
|
| Whether exporting of metrics to this backend is enabled. |
|
| The event type that should be published. This property will be ignored if 'meter-name-event-type-enabled' is set to 'true'. |
|
| Whether to send the meter name as the event type instead of using the 'event-type' configuration property value. Can be set to 'true' if New Relic guidelines are not being followed or event types consistent with previous Spring Boot releases are required. |
|
| Read timeout for requests to this backend. |
|
| Step size (i.e. reporting frequency) to use. |
|
| URI to ship metrics to. |
|
| Whether to enable publishing descriptions as part of the scrape payload to Prometheus. Turn this off to minimize the amount of data sent on each scrape. |
|
| Whether exporting of metrics to this backend is enabled. |
|
| Histogram type for backing DistributionSummary and Timer. |
|
| Base URL for the Pushgateway. |
|
| Enable publishing via a Prometheus Pushgateway. |
|
| Grouping key for the pushed metrics. | |
| Job identifier for this application instance. | |
| Login password of the Prometheus Pushgateway. | |
| Frequency with which to push metrics. |
|
| Operation that should be performed on shutdown. |
|
| Login user of the Prometheus Pushgateway. | |
| Step size (i.e. reporting frequency) to use. |
|
| SignalFX access token. | |
| Number of measurements per request to use for this backend. If more measurements are found, then multiple requests will be made. |
|
| Connection timeout for requests to this backend. |
|
| Whether exporting of metrics to this backend is enabled. |
|
| Read timeout for requests to this backend. |
|
| Uniquely identifies the app instance that is publishing metrics to SignalFx. Defaults to the local host name. | |
| Step size (i.e. reporting frequency) to use. |
|
| URI to ship metrics to. |
|
| Whether exporting of metrics to this backend is enabled. |
|
| Counting mode. |
|
| Step size (i.e. reporting frequency) to use. |
|
| Number of measurements per request to use for this backend. If more measurements are found, then multiple requests will be made. |
|
| Connection timeout for requests to this backend. |
|
| Whether exporting of metrics to this backend is enabled. |
|
| Identifier of the Google Cloud project to monitor. | |
| Read timeout for requests to this backend. |
|
| Monitored resource's labels. | |
| Monitored resource type. |
|
| Step size (i.e. reporting frequency) to use. |
|
| Whether to use semantically correct metric types. When false, counter metrics are published as the GAUGE MetricKind. When true, counter metrics are published as the CUMULATIVE MetricKind. |
|
| Whether measurements should be buffered before sending to the StatsD server. |
|
| Whether exporting of metrics to StatsD is enabled. |
|
| StatsD line protocol to use. |
|
| Host of the StatsD server to receive exported metrics. |
|
| Total length of a single payload should be kept within your network's MTU. |
|
| How often gauges will be polled. When a gauge is polled, its value is recalculated and if the value has changed (or publishUnchangedMeters is true), it is sent to the StatsD server. |
|
| Port of the StatsD server to receive exported metrics. |
|
| Protocol of the StatsD server to receive exported metrics. |
|
| Whether to send unchanged meters to the StatsD server. |
|
| Step size to use in computing windowed statistics like max. To get the most out of these statistics, align the step interval to be close to your scrape interval. |
|
| API token used when publishing metrics directly to the Wavefront API host. | |
| Number of measurements per request to use for this backend. If more measurements are found, then multiple requests will be made. |
|
| Whether exporting of metrics to this backend is enabled. |
|
| Global prefix to separate metrics originating from this app's instrumentation from those originating from other Wavefront integrations when viewed in the Wavefront UI. | |
| Interval at which points are flushed to the Wavefront server. |
|
| Maximum queue size of the in-memory buffer. |
|
| Maximum message size, such that each batch is reported as one or more messages where no message exceeds the specified size. | |
| Unique identifier for the app instance that is the source of metrics being published to Wavefront. Defaults to the local host name. | |
| Step size (i.e. reporting frequency) to use. |
|
| URI to ship metrics to. |
|
| Whether to enable auto-timing. |
|
| Percentiles for which additional time series should be published. | |
| Whether to publish percentile histograms. |
|
| Whether to enable Mongo client command metrics. |
|
| Whether to enable Mongo connection pool metrics. |
|
| Comma-separated list of paths to report disk metrics for. |
|
| Common tags that are applied to every meter. | |
| Whether auto-configured MeterRegistry implementations should be bound to the global static registry on Metrics. For testing, set this to 'false' to maximize test independence. |
|
| Maximum number of unique URI tag values allowed. After the max number of tag values is reached, metrics with additional tag values are denied by filter. |
|
| Whether to automatically time web client requests. |
|
| Computed non-aggregable percentiles to publish. | |
| Whether percentile histograms should be published. |
|
| Name of the metric for sent requests. |
|
| Maximum number of unique URI tag values allowed. After the max number of tag values is reached, metrics with additional tag values are denied by filter. |
|
| Whether to automatically time web server requests. |
|
| Computed non-aggregable percentiles to publish. | |
| Whether percentile histograms should be published. |
|
| Whether the trailing slash should be ignored when recording metrics. |
|
| Name of the metric for received requests. |
|
| Add the "X-Application-Context" HTTP header in each response. |
|
| Network address to which the management endpoints should bind. Requires a custom management.server.port. | |
| Management endpoint base path (for instance, '/management'). Requires a custom management.server.port. | |
| Management endpoint HTTP port (uses the same port as the application by default). Configure a different port to use management-specific SSL. | |
| Path to a PEM-encoded SSL certificate file. | |
| Path to a PEM-encoded private key file for the SSL certificate. | |
| Supported SSL ciphers. | |
| Client authentication mode. Requires a trust store. | |
| Whether to enable SSL support. |
|
| Enabled SSL protocols. | |
| Alias that identifies the key in the key store. | |
| Password used to access the key in the key store. | |
| Path to the key store that holds the SSL certificate (typically a jks file). | |
| Password used to access the key store. | |
| Provider for the key store. | |
| Type of the key store. | |
| SSL protocol to use. |
|
| Path to a PEM-encoded SSL certificate authority file. | |
| Path to a PEM-encoded private key file for the SSL certificate authority. | |
| Trust store that holds SSL certificates. | |
| Password used to access the trust store. | |
| Provider for the trust store. | |
| Type of the trust store. | |
| Whether to enable HTTP request-response tracing. |
|
| Items to be included in the trace. Defaults to request headers (excluding Authorization and Cookie), response headers (excluding Set-Cookie), and time taken. |
|
A.15. Devtools Properties
Name | Description | Default Value |
---|---|---|
| Whether to enable development property defaults. |
|
| Whether to enable a livereload.com-compatible server. |
|
| Server port. |
|
| Context path used to handle the remote connection. |
|
| The host of the proxy to use to connect to the remote application. | |
| The port of the proxy to use to connect to the remote application. | |
| Whether to enable remote restart. |
|
| A shared secret required to establish a connection (required to enable remote support). | |
| HTTP header used to transfer the shared secret. |
|
| Additional patterns that should be excluded from triggering a full restart. | |
| Additional paths to watch for changes. | |
| Whether to enable automatic restart. |
|
| Patterns that should be excluded from triggering a full restart. |
|
| Whether to log the condition evaluation delta upon restart. |
|
| Amount of time to wait between polling for classpath changes. |
|
| Amount of quiet time required without any classpath changes before a restart is triggered. |
|
| Name of a specific file that, when changed, triggers the restart check. Must be a simple name (without any path) of a file that appears on your classpath. If not specified, any classpath file change triggers the restart. |
Appendix B: Configuration Metadata
Spring Boot jars include metadata files that provide details of all supported configuration properties. The files are designed to let IDE developers offer contextual help and “code completion” as users are working with application.properties
or application.yml
files.
The majority of the metadata file is generated automatically at compile time by processing all items annotated with @ConfigurationProperties
. However, it is possible to write part of the metadata manually for corner cases or more advanced use cases.
B.1. Metadata Format
Configuration metadata files are located inside jars under META-INF/spring-configuration-metadata.json
. They use a JSON format with items categorized under either “groups” or “properties” and additional values hints categorized under "hints", as shown in the following example:
{"groups": [
{
"name": "server",
"type": "org.springframework.boot.autoconfigure.web.ServerProperties",
"sourceType": "org.springframework.boot.autoconfigure.web.ServerProperties"
},
{
"name": "spring.jpa.hibernate",
"type": "org.springframework.boot.autoconfigure.orm.jpa.JpaProperties$Hibernate",
"sourceType": "org.springframework.boot.autoconfigure.orm.jpa.JpaProperties",
"sourceMethod": "getHibernate()"
}
...
],"properties": [
{
"name": "server.port",
"type": "java.lang.Integer",
"sourceType": "org.springframework.boot.autoconfigure.web.ServerProperties"
},
{
"name": "server.address",
"type": "java.net.InetAddress",
"sourceType": "org.springframework.boot.autoconfigure.web.ServerProperties"
},
{
"name": "spring.jpa.hibernate.ddl-auto",
"type": "java.lang.String",
"description": "DDL mode. This is actually a shortcut for the \"hibernate.hbm2ddl.auto\" property.",
"sourceType": "org.springframework.boot.autoconfigure.orm.jpa.JpaProperties$Hibernate"
}
...
],"hints": [
{
"name": "spring.jpa.hibernate.ddl-auto",
"values": [
{
"value": "none",
"description": "Disable DDL handling."
},
{
"value": "validate",
"description": "Validate the schema, make no changes to the database."
},
{
"value": "update",
"description": "Update the schema if necessary."
},
{
"value": "create",
"description": "Create the schema and destroy previous data."
},
{
"value": "create-drop",
"description": "Create and then destroy the schema at the end of the session."
}
]
}
]}
Each “property” is a configuration item that the user specifies with
a given value. For example, server.port
and server.address
might be specified in your application.properties
/application.yaml
, as follows:
Properties
server.port=9090
server.address=127.0.0.1
Yaml
server:
port: 9090
address: 127.0.0.1
The “groups” are higher level items that do not themselves specify a value but instead provide a contextual grouping for properties. For example, the server.port
and server.address
properties are part of the server
group.
It is not required that every “property” has a “group”. Some properties might exist in their own right.
Finally, “hints” are additional information used to assist the user in configuring a given property. For example, when a developer is configuring the spring.jpa.hibernate.ddl-auto
property, a tool can use the hints to offer some auto-completion help for the none
, validate
, update
, create
, and create-drop
values.
Group Attributes
The JSON object contained in the groups
array can contain the attributes shown in the following table:
Name | Type | Purpose |
---|---|---|
| String | The full name of the group. This attribute is mandatory. |
| String | The class name of the data type of the group. For example, if the group were based on a class annotated with |
| String | A short description of the group that can be displayed to users. If no description is available, it may be omitted. It is recommended that descriptions be short paragraphs, with the first line providing a concise summary. The last line in the description should end with a period ( |
| String | The class name of the source that contributed this group. For example, if the group were based on a |
| String | The full name of the method (including parentheses and argument types) that contributed this group (for example, the name of a |
Property Attributes
The JSON object contained in the properties
array can contain the attributes described in the following table:
Name | Type | Purpose |
---|---|---|
| String | The full name of the property. Names are in lower-case period-separated form (for example, |
| String | The full signature of the data type of the property (for example, |
| String | A short description of the property that can be displayed to users. If no description is available, it may be omitted. It is recommended that descriptions be short paragraphs, with the first line providing a concise summary. The last line in the description should end with a period ( |
| String | The class name of the source that contributed this property. For example, if the property were from a class annotated with |
| Object | The default value, which is used if the property is not specified. If the type of the property is an array, it can be an array of value(s). If the default value is unknown, it may be omitted. |
| Deprecation | Specify whether the property is deprecated. If the field is not deprecated or if that information is not known, it may be omitted. The next table offers more detail about the |
The JSON object contained in the deprecation
attribute of each properties
element can contain the following attributes:
Name | Type | Purpose |
---|---|---|
| String | The level of deprecation, which can be either |
| String | A short description of the reason why the property was deprecated. If no reason is available, it may be omitted. It is recommended that descriptions be short paragraphs, with the first line providing a concise summary. The last line in the description should end with a period ( |
| String | The full name of the property that replaces this deprecated property. If there is no replacement for this property, it may be omitted. |
Prior to Spring Boot 1.3, a single deprecated boolean attribute could be used instead of the deprecation element. This is still supported in a deprecated fashion and should no longer be used. If no reason and replacement are available, an empty deprecation object should be set.
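As noted above, when no reason or replacement is known, the modern form is simply an empty deprecation object. A minimal sketch follows; the property name is purely illustrative:
{"properties": [
    {
        "name": "my.legacy.property",
        "type": "java.lang.String",
        "deprecation": {}
    }
]}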
Deprecation can also be specified declaratively in code by adding the @DeprecatedConfigurationProperty
annotation to the getter exposing the deprecated property. For instance, assume that the my.app.target
property was confusing and was renamed to my.app.name
. The following example shows how to handle that situation:
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.boot.context.properties.DeprecatedConfigurationProperty;
@ConfigurationProperties("my.app")
public class MyProperties {
private String name;
public String getName() {
return this.name;
}
public void setName(String name) {
this.name = name;
}
@Deprecated
@DeprecatedConfigurationProperty(replacement = "my.app.name")
public String getTarget() {
return this.name;
}
@Deprecated
public void setTarget(String target) {
this.name = target;
}
}
There is no way to set a level; warning is always assumed, since code is still handling the property.
The preceding code makes sure that the deprecated property still works (delegating to the name
property behind the scenes). Once the getTarget
and setTarget
methods can be removed from your public API, the automatic deprecation hint in the metadata goes away as well. If you want to keep a hint, adding manual metadata with an error
deprecation level ensures that users are still informed about that property. Doing so is particularly useful when a replacement is provided.
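Continuing the my.app example, such a manual entry could look like the following sketch (handwritten metadata, not generated output):
{"properties": [
    {
        "name": "my.app.target",
        "type": "java.lang.String",
        "deprecation": {
            "level": "error",
            "reason": "Confusing name.",
            "replacement": "my.app.name"
        }
    }
]}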
Hint Attributes
The JSON object contained in the hints
array can contain the attributes shown in the following table:
Name | Type | Purpose |
---|---|---|
| String | The full name of the property to which this hint refers. Names are in lower-case period-separated form (such as |
| ValueHint[] | A list of valid values as defined by the |
| ValueProvider[] | A list of providers as defined by the |
The JSON object contained in the values
attribute of each hint
element can contain the attributes described in the following table:
Name | Type | Purpose |
---|---|---|
| Object | A valid value for the element to which the hint refers. If the type of the property is an array, it can also be an array of value(s). This attribute is mandatory. |
| String | A short description of the value that can be displayed to users. If no description is available, it may be omitted. It is recommended that descriptions be short paragraphs, with the first line providing a concise summary. The last line in the description should end with a period ( |
The JSON object contained in the providers
attribute of each hint
element can contain the attributes described in the following table:
Name | Type | Purpose |
---|---|---|
| String | The name of the provider to use to offer additional content assistance for the element to which the hint refers. |
| JSON object | Any additional parameter that the provider supports (check the documentation of the provider for more details). |
Repeated Metadata Items
Objects with the same “property” and “group” name can appear multiple times within a metadata file. For example, you could bind two separate classes to the same prefix, with each having potentially overlapping property names. While the same names appearing in the metadata multiple times should not be common, consumers of metadata should take care to ensure that they support it.
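As an illustration, if two hypothetical classes com.example.FirstProperties and com.example.SecondProperties were both annotated with @ConfigurationProperties("demo"), the metadata could legitimately contain two groups with the same name:
{"groups": [
    {
        "name": "demo",
        "type": "com.example.FirstProperties",
        "sourceType": "com.example.FirstProperties"
    },
    {
        "name": "demo",
        "type": "com.example.SecondProperties",
        "sourceType": "com.example.SecondProperties"
    }
]}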
B.2. Providing Manual Hints
To improve the user experience and further assist the user in configuring a given property, you can provide additional metadata that:
Describes the list of potential values for a property.
Associates a provider, to attach a well defined semantic to a property, so that a tool can discover the list of potential values based on the project’s context.
Value Hint
The name
attribute of each hint refers to the name
of a property. In the initial example shown earlier, we provide five values for the spring.jpa.hibernate.ddl-auto
property: none, validate, update, create, and create-drop. Each value may have a description as well.
If your property is of type Map
, you can provide hints for both the keys and the values (but not for the map itself). The special .keys
and .values
suffixes must refer to the keys and the values, respectively.
Assume that my.contexts maps magic String values to an integer, as shown in the following example:
import java.util.Map;
import org.springframework.boot.context.properties.ConfigurationProperties;
@ConfigurationProperties("my")
public class MyProperties {
private Map<String, Integer> contexts;
// getters/setters ...
public Map<String, Integer> getContexts() {
return this.contexts;
}
public void setContexts(Map<String, Integer> contexts) {
this.contexts = contexts;
}
}
The magic values (in this example) are sample1 and sample2. In order to offer additional content assistance for the keys, you could add the following JSON to the manual metadata of the module:
{"hints": [
{
"name": "my.contexts.keys",
"values": [
{
"value": "sample1"
},
{
"value": "sample2"
}
]
}
]}
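A companion hint using the .values suffix could describe the map values in the same way. The following is a sketch building on the example above; the value and its description are illustrative:
{"hints": [
    {
        "name": "my.contexts.values",
        "values": [
            {
                "value": 1,
                "description": "Hypothetical meaning of this integer value."
            }
        ]
    }
]}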
We recommend that you use an Enum for those two values instead. If your IDE supports it, this is by far the most effective approach to auto-completion.
Value Providers
Providers are a powerful way to attach semantics to a property. In this section, we define the official providers that you can use for your own hints. However, your favorite IDE may implement some of these or none of them. Also, it could eventually provide its own.
As this is a new feature, IDE vendors must catch up with how it works. Adoption times naturally vary.
The following table summarizes the list of supported providers:
Name | Description |
---|---|
| Permits any additional value to be provided. |
| Auto-completes the classes available in the project. Usually constrained by a base class that is specified by the |
| Handles the property as if it were defined by the type defined by the mandatory |
| Auto-completes valid logger names and logger groups. Typically, package and class names available in the current project can be auto-completed as well as defined groups. |
| Auto-completes the available bean names in the current project. Usually constrained by a base class that is specified by the |
| Auto-completes the available Spring profile names in the project. |
Only one provider can be active for a given property, but you can specify several providers if they can all manage the property in some way. Make sure to place the most powerful provider first, as the IDE must use the first one in the JSON section that it can handle. If no provider for a given property is supported, no special content assistance is provided, either.
Any
The special any provider value permits any additional values to be provided. Regular value validation based on the property type should be applied if this is supported.
This provider is typically used if you have a list of values and any extra values should still be considered as valid.
The following example offers on
and off
as auto-completion values for system.state
:
{"hints": [
{
"name": "system.state",
"values": [
{
"value": "on"
},
{
"value": "off"
}
],
"providers": [
{
"name": "any"
}
]
}
]}
Note that, in the preceding example, any other value is also allowed.
Class Reference
The class-reference provider auto-completes classes available in the project. This provider supports the following parameters:
Parameter | Type | Default value | Description |
---|---|---|---|
|
| none | The fully qualified name of the class that should be assignable to the chosen value. Typically used to filter out non-candidate classes. Note that this information can be provided by the type itself by exposing a class with the appropriate upper bound. |
|
| true | Specify whether only concrete classes are to be considered as valid candidates. |
The following metadata snippet corresponds to the standard server.servlet.jsp.class-name
property that defines the JspServlet
class name to use:
{"hints": [
{
"name": "server.servlet.jsp.class-name",
"providers": [
{
"name": "class-reference",
"parameters": {
"target": "javax.servlet.http.HttpServlet"
}
}
]
}
]}
Handle As
The handle-as provider lets you substitute the type of the property with a higher-level type. This typically happens when the property has a java.lang.String type, because you do not want your configuration classes to rely on classes that may not be on the classpath. This provider supports the following parameters:
Parameter | Type | Default value | Description |
---|---|---|---|
|
| none | The fully qualified name of the type to consider for the property. This parameter is mandatory. |
The following types can be used:
Any java.lang.Enum: Lists the possible values for the property. (We recommend defining the property with the Enum type, as no further hint should be required for the IDE to auto-complete the values.)
java.nio.charset.Charset: Supports auto-completion of charset/encoding values (such as UTF-8).
java.util.Locale: Auto-completion of locales (such as en_US).
org.springframework.util.MimeType: Supports auto-completion of content type values (such as text/plain).
org.springframework.core.io.Resource: Supports auto-completion of Spring’s Resource abstraction to refer to a file on the filesystem or on the classpath (such as classpath:/sample.properties).
If multiple values can be provided, use a Collection or Array type to teach the IDE about it.
The following metadata snippet corresponds to the standard spring.liquibase.change-log
property that defines the path to the changelog to use. It is actually used internally as an org.springframework.core.io.Resource
but cannot be exposed as such, because we need to keep the original String value to pass it to the Liquibase API.
{"hints": [
{
"name": "spring.liquibase.change-log",
"providers": [
{
"name": "handle-as",
"parameters": {
"target": "org.springframework.core.io.Resource"
}
}
]
}
]}
Logger Name
The logger-name provider auto-completes valid logger names and logger groups. Typically, package and class names available in the current project can be auto-completed. If groups are enabled (default) and if a custom logger group is identified in the configuration, auto-completion for it should be provided. Specific frameworks may have extra magic logger names that can be supported as well.
This provider supports the following parameters:
Parameter | Type | Default value | Description |
---|---|---|---|
| group | boolean | true | Specify whether known groups should be considered. |
Since a logger name can be any arbitrary name, this provider should allow any value but could highlight valid package and class names that are not available in the project’s classpath.
The following metadata snippet corresponds to the standard logging.level property. Keys are logger names, and values correspond to the standard log levels or any custom level. As Spring Boot defines a few logger groups out-of-the-box, dedicated value hints have been added for those.
{"hints": [
{
"name": "logging.level.keys",
"values": [
{
"value": "root",
"description": "Root logger used to assign the default logging level."
},
{
"value": "sql",
"description": "SQL logging group including Hibernate SQL logger."
},
{
"value": "web",
"description": "Web logging group including codecs."
}
],
"providers": [
{
"name": "logger-name"
}
]
},
{
"name": "logging.level.values",
"values": [
{
"value": "trace"
},
{
"value": "debug"
},
{
"value": "info"
},
{
"value": "warn"
},
{
"value": "error"
},
{
"value": "fatal"
},
{
"value": "off"
}
],
"providers": [
{
"name": "any"
}
]
}
]}
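With the hints above, an IDE can complete both the logger-name keys and the level values in entries such as the following (com.example.myapp is a placeholder package name):

```properties
logging.level.root=warn
logging.level.sql=debug
logging.level.com.example.myapp=trace
```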
Spring Bean Reference
The spring-bean-reference provider auto-completes the beans that are defined in the configuration of the current project. This provider supports the following parameters:
Parameter | Type | Default value | Description |
---|---|---|---|
| target | String | none | The fully qualified name of the bean class that should be assignable to the candidate. Typically used to filter out non-candidate beans. |
The following metadata snippet corresponds to the standard spring.jmx.server
property that defines the name of the MBeanServer
bean to use:
{"hints": [
{
"name": "spring.jmx.server",
"providers": [
{
"name": "spring-bean-reference",
"parameters": {
"target": "javax.management.MBeanServer"
}
}
]
}
]}
The binder is not aware of the metadata. If you provide that hint, you still need to transform the bean name into an actual bean reference by using the ApplicationContext.
Spring Profile Name
The spring-profile-name provider auto-completes the Spring profiles that are defined in the configuration of the current project.
The following
metadata snippet corresponds to the standard spring.profiles.active
property that defines the name of the Spring profile(s) to enable:
{"hints": [
{
"name": "spring.profiles.active",
"providers": [
{
"name": "spring-profile-name"
}
]
}
]}
B.3. Generating Your Own Metadata by Using the Annotation Processor
You can easily generate your own
configuration metadata file from items annotated with @ConfigurationProperties
by using the spring-boot-configuration-processor
jar. The jar includes a Java annotation processor which is invoked as your project is compiled.
Configuring the Annotation Processor
To use the processor, include a dependency on spring-boot-configuration-processor.
With Maven the dependency should be declared as optional, as shown in the following example:
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-configuration-processor</artifactId>
<optional>true</optional>
</dependency>
With Gradle, the dependency should be declared in the annotationProcessor
configuration, as shown in the following example:
dependencies {
annotationProcessor "org.springframework.boot:spring-boot-configuration-processor"
}
If you are using an additional-spring-configuration-metadata.json
file, the compileJava
task should be configured to depend on the processResources
task, as shown in the following example:
tasks.named('compileJava') {
inputs.files(tasks.named('processResources'))
}
This dependency ensures that the additional metadata is available when the annotation processor runs during compilation.
If you are using AspectJ in your project, you need to make sure that the annotation processor runs only once. There are several ways to do this. With Maven, you can configure the maven-apt-plugin explicitly and add the dependency to the annotation processor only there. You could also let the AspectJ plugin run all the processing and disable annotation processing in the maven-compiler-plugin configuration.
If you are using Lombok in your project, you need to make sure that its annotation processor runs before spring-boot-configuration-processor.
Automatic Metadata Generation
The processor picks up both classes and methods that are annotated with @ConfigurationProperties. If the class is also annotated with @ConstructorBinding, a single constructor is expected and one property is created per constructor parameter. Otherwise, properties are discovered through the presence of standard getters and setters, with special handling for collection and map types (detected even if only a getter is present). The annotation processor also supports the use of the @Data, @Value, @Getter, and @Setter Lombok annotations.
Consider the following example:
import org.springframework.boot.context.properties.ConfigurationProperties;
@ConfigurationProperties(prefix = "my.server")
public class MyServerProperties {
/**
* Name of the server.
*/
private String name;
/**
* IP address to listen to.
*/
private String ip = "127.0.0.1";
/**
* Port to listen to.
*/
private int port = 9797;
// getters/setters ...
public String getName() {
return this.name;
}
public void setName(String name) {
this.name = name;
}
public String getIp() {
return this.ip;
}
public void setIp(String ip) {
this.ip = ip;
}
public int getPort() {
return this.port;
}
public void setPort(int port) {
this.port = port;
}
}
This exposes three properties, where my.server.name has no default and my.server.ip and my.server.port default to "127.0.0.1" and 9797 respectively. The Javadoc on fields is used to populate the description attribute. For instance, the description of my.server.ip is "IP address to listen to.".
You should only use plain text in @ConfigurationProperties field Javadoc, since it is not processed before being added to the JSON.
The annotation processor applies a number of heuristics to extract the default value from the source model. Default values have to be provided statically. In particular, do not refer to a constant defined in another class. Also, the annotation processor cannot auto-detect default values for Enums and Collections.
For cases where the default value could not be detected, manual metadata should be provided. Consider the following example:
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import org.springframework.boot.context.properties.ConfigurationProperties;
@ConfigurationProperties(prefix = "my.messaging")
public class MyMessagingProperties {
private List<String> addresses = new ArrayList<>(Arrays.asList("a", "b"));
private ContainerType containerType = ContainerType.SIMPLE;
// getters/setters ...
public List<String> getAddresses() {
return this.addresses;
}
public void setAddresses(List<String> addresses) {
this.addresses = addresses;
}
public ContainerType getContainerType() {
return this.containerType;
}
public void setContainerType(ContainerType containerType) {
this.containerType = containerType;
}
public enum ContainerType {
SIMPLE, DIRECT
}
}
In order to document default values for properties in the class above, you could add the following content to the manual metadata of the module:
{"properties": [
{
"name": "my.messaging.addresses",
"defaultValue": ["a", "b"]
},
{
"name": "my.messaging.container-type",
"defaultValue": "simple"
}
]}
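Once bound, these properties can be set as shown below; note that relaxed binding maps the container-type key onto the containerType property, and values are matched against the ContainerType enum:

```properties
my.messaging.addresses=a,b
my.messaging.container-type=direct
```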
Only the name of the property is required to document additional metadata for existing properties.
Nested Properties
The annotation processor automatically considers inner classes as nested properties. Rather than documenting the ip
and port
at the root of the namespace, we could create a sub-namespace for it. Consider the updated example:
import org.springframework.boot.context.properties.ConfigurationProperties;
@ConfigurationProperties(prefix = "my.server")
public class MyServerProperties {
private String name;
private Host host;
// getters/setters ...
public String getName() {
return this.name;
}
public void setName(String name) {
this.name = name;
}
public Host getHost() {
return this.host;
}
public void setHost(Host host) {
this.host = host;
}
public static class Host {
private String ip;
private int port;
// getters/setters ...
public String getIp() {
return this.ip;
}
public void setIp(String ip) {
this.ip = ip;
}
public int getPort() {
return this.port;
}
public void setPort(int port) {
this.port = port;
}
}
}
The preceding example produces metadata information for the my.server.name, my.server.host.ip, and my.server.host.port properties. You can use the @NestedConfigurationProperty annotation on a field to indicate that a regular (non-inner) class should be treated as if it were nested.
This has no effect on collections and maps, as those types are automatically identified, and a single metadata property is generated for each of them.
Adding Additional Metadata
Spring Boot’s configuration file handling is quite flexible, and it is often the case that properties may exist that are not bound to a @ConfigurationProperties
bean. You
may also need to tune some attributes of an existing key. To support such cases and let you provide custom "hints", the annotation processor automatically merges items from META-INF/additional-spring-configuration-metadata.json
into the main metadata file.
If you refer to a property that has been detected automatically, the description, default value, and deprecation information are overridden, if specified. If the manual property declaration is not identified in the current module, it is added as a new property.
The format of
the additional-spring-configuration-metadata.json
file is exactly the same as the regular spring-configuration-metadata.json
. The additional properties file is optional. If you do not have any additional properties, do not add the file.
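For example, an additional-spring-configuration-metadata.json file could deprecate one of the properties from the earlier example; the replacement key shown below is hypothetical:

```json
{"properties": [
  {
    "name": "my.server.name",
    "deprecation": {
      "replacement": "my.server.display-name",
      "reason": "Renamed for clarity."
    }
  }
]}
```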
Appendix C: Auto-configuration Classes
This appendix contains details of all of the auto-configuration classes
provided by Spring Boot, with links to documentation and source code. Remember to also look at the conditions report in your application for more details of which features are switched on. (To do so, start the app with --debug
or -Ddebug
or, in an Actuator application, use the conditions
endpoint).
Appendix D: Test Auto-configuration Annotations
This appendix describes the @…Test
auto-configuration annotations that Spring Boot provides to test slices of your application.
D.1. Test Slices
The following table lists the various @…Test
annotations that can be used to test slices of your
application and the auto-configuration that they import by default:
Test slice | Imported auto-configuration |
---|---|
Appendix E: The Executable Jar Format
The spring-boot-loader module lets Spring Boot support executable jar and war files. If you use the Maven plugin or the Gradle plugin, executable jars are automatically generated, and you generally do not need to know the details of how they work.
If you need to create executable jars from a different build system or if you are just curious about the underlying technology, this appendix provides some background.
E.1. Nested JARs
Java does not provide any standard way to load nested jar files (that is, jar files that are themselves contained within a jar). This can be problematic if you need to distribute a self-contained application that can be run from the command line without unpacking.
To solve this problem, many developers use “shaded” jars. A shaded jar packages all classes, from all jars, into a single “uber jar”. The problem with shaded jars is that it becomes hard to see which libraries are actually in your application. It can also be problematic if the same filename is used (but with different content) in multiple jars. Spring Boot takes a different approach and lets you actually nest jars directly.
The Executable Jar File Structure
Spring Boot Loader-compatible jar files should be structured in the following way:
example.jar
 |
 +-META-INF
 |  +-MANIFEST.MF
 +-org
 |  +-springframework
 |     +-boot
 |        +-loader
 |           +-<spring boot loader classes>
 +-BOOT-INF
    +-classes
    |  +-mycompany
    |     +-project
    |        +-YourClasses.class
    +-lib
       +-dependency1.jar
       +-dependency2.jar
Application classes should be placed in a nested
BOOT-INF/classes
directory. Dependencies should be placed in a nested BOOT-INF/lib
directory.
The Executable War File Structure
Spring Boot Loader-compatible war files should be structured in the following way:
example.war
 |
 +-META-INF
 |  +-MANIFEST.MF
 +-org
 |  +-springframework
 |     +-boot
 |        +-loader
 |           +-<spring boot loader classes>
 +-WEB-INF
    +-classes
    |  +-com
    |     +-mycompany
    |        +-project
    |           +-YourClasses.class
    +-lib
    |  +-dependency1.jar
    |  +-dependency2.jar
    +-lib-provided
       +-servlet-api.jar
       +-dependency3.jar
Dependencies should be
placed in a nested WEB-INF/lib
directory. Any dependencies that are required when running embedded but are not required when deploying to a traditional web container should be placed in WEB-INF/lib-provided
.
Index Files
Spring Boot Loader-compatible jar and war archives can include
additional index files under the BOOT-INF/
directory. A classpath.idx
file can be provided for both jars and wars, and it provides the ordering that jars should be added to the classpath. The layers.idx
file can be used only for jars, and it allows a jar to be split into logical layers for Docker/OCI image creation.
Index files follow a YAML-compatible syntax so that they can be easily parsed by third-party tools. These files, however, are not parsed internally as YAML, and they must be written in exactly the formats described below in order to be used.
Classpath Index
The classpath index file can be provided in BOOT-INF/classpath.idx
. It provides a list of jar names (including the directory) in the order that they should be added to the classpath. Each line
must start with dash space ("-·"
) and names must be in double quotes.
For example, given the following jar:
example.jar
 |
 +-META-INF
 |  +-...
 +-BOOT-INF
    +-classes
    |  +-...
    +-lib
       +-dependency1.jar
       +-dependency2.jar
The index file would look like this:
- "BOOT-INF/lib/dependency2.jar"
- "BOOT-INF/lib/dependency1.jar"
Layer Index
The layers index file can be provided in BOOT-INF/layers.idx
. It provides a
list of layers and the parts of the jar that should be contained within them. Layers are written in the order that they should be added to the Docker/OCI image. Layer names are written as quoted strings prefixed with dash space ("-·"
) and with a colon (":"
) suffix. Layer content is either a file or directory name written as a quoted string prefixed by space space dash space ("··-·"
). A directory name ends with /
, a file name does not. When a directory name is used it means
that all files inside that directory are in the same layer.
A typical example of a layers index would be:
- "dependencies":
  - "BOOT-INF/lib/dependency1.jar"
  - "BOOT-INF/lib/dependency2.jar"
- "application":
  - "BOOT-INF/classes/"
  - "META-INF/"
E.2. Spring Boot’s “JarFile” Class
The core class used to support loading nested jars is org.springframework.boot.loader.jar.JarFile
. It lets you load jar content from a standard jar file or
from nested child jar data. When first loaded, the location of each JarEntry
is mapped to a physical file offset of the outer jar, as shown in the following example:
myapp.jar
+-------------------+-------------------------+
| /BOOT-INF/classes | /BOOT-INF/lib/mylib.jar |
|+-----------------+||+-----------+----------+|
||     A.class     ||||  B.class  |  C.class ||
|+-----------------+||+-----------+----------+|
+-------------------+-------------------------+
 ^                    ^           ^
 0063                 3452        3980
The preceding example shows how A.class
can be found in /BOOT-INF/classes
in myapp.jar
at position 0063
. B.class
from the nested jar can actually be found in myapp.jar
at position 3452
, and C.class
is at position 3980
.
Armed with this information, we can load specific nested entries by seeking to the appropriate part of the outer jar. We do not need to unpack the archive, and we do not need to read all entry data into memory.
Compatibility With the Standard Java “JarFile”
Spring Boot Loader strives to remain compatible with existing code and libraries. org.springframework.boot.loader.jar.JarFile
extends from java.util.jar.JarFile
and should work as a drop-in replacement. The getURL()
method returns a URL
that opens a connection compatible with java.net.JarURLConnection
and can be used with Java’s URLClassLoader
.
E.3. Launching Executable Jars
The org.springframework.boot.loader.Launcher
class is a special bootstrap class that is used as an
executable jar’s main entry point. It is the actual Main-Class
in your jar file, and it is used to set up an appropriate URLClassLoader
and ultimately call your main()
method.
There are three launcher subclasses (JarLauncher
, WarLauncher
, and PropertiesLauncher
). Their purpose is to load resources (.class
files and so on) from nested jar files or war files in directories (as opposed to those explicitly on the classpath). In the case of JarLauncher
and WarLauncher
, the nested paths are fixed. JarLauncher
looks in
BOOT-INF/lib/
, and WarLauncher
looks in WEB-INF/lib/
and WEB-INF/lib-provided/
. You can add extra jars in those locations if you want more. The PropertiesLauncher
looks in BOOT-INF/lib/
in your application archive by default. You can add additional locations by setting an environment variable called LOADER_PATH
or loader.path
in loader.properties
(which is a comma-separated list of directories, archives, or directories within archives).
Launcher Manifest
You need to specify an appropriate Launcher
as the Main-Class
attribute of META-INF/MANIFEST.MF
. The actual class that you want to launch (that is, the class that contains a main
method) should be specified in the Start-Class
attribute.
The following example shows a typical MANIFEST.MF
for an executable jar file:
Main-Class: org.springframework.boot.loader.JarLauncher Start-Class: com.mycompany.project.MyApplication
For a war file, it would be as follows:
Main-Class: org.springframework.boot.loader.WarLauncher Start-Class: com.mycompany.project.MyApplication
You need not specify Class-Path entries in your manifest file. The classpath is deduced from the nested jars.
E.4. PropertiesLauncher Features
PropertiesLauncher
has a few special features that can be enabled with external properties (System properties, environment variables, manifest entries, or loader.properties
). The following table describes these properties:
Key | Purpose |
---|---|
| loader.path | Comma-separated Classpath, such as lib,${HOME}/app/lib. Earlier entries take precedence, like a regular -classpath on the javac command line. |
| loader.home | Used to resolve relative paths in loader.path. Also used to locate the loader.properties file (for example, file:///opt/app). It defaults to ${user.dir}. |
| loader.args | Default arguments for the main method (space separated). |
| loader.main | Name of main class to launch (for example, com.app.Application). |
| loader.config.name | Name of properties file (for example, launcher). It defaults to loader. |
| loader.config.location | Path to properties file (for example, classpath:loader.properties). It defaults to loader.properties. |
| loader.system | Boolean flag to indicate that all properties should be added to System properties. It defaults to false. |
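For example, assuming a fat jar named myapp.jar, PropertiesLauncher could be invoked directly with an extra library directory (the /opt/app/ext path is illustrative):

```
java -cp myapp.jar -Dloader.path=/opt/app/ext,BOOT-INF/lib org.springframework.boot.loader.PropertiesLauncher
```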
When specified as environment variables or manifest entries, the following names should be used:
Key | Manifest entry | Environment variable |
---|---|---|
Build plugins automatically move the Main-Class attribute to Start-Class when the fat jar is built. If you use that, specify the name of the class to launch by using the Main-Class attribute and leaving out Start-Class .
The following rules apply to working with PropertiesLauncher:
- loader.properties is searched for in loader.home, then in the root of the classpath, and then in classpath:/BOOT-INF/classes. The first location where a file with that name exists is used.
- loader.home is the directory location of an additional properties file (overriding the default) only when loader.config.location is not specified.
- loader.path can contain directories (which are scanned recursively for jar and zip files), archive paths, a directory within an archive that is scanned for jar files (for example, dependencies.jar!/lib), or wildcard patterns (for the default JVM behavior). Archive paths can be relative to loader.home or anywhere in the file system with a jar:file: prefix.
- loader.path (if empty) defaults to BOOT-INF/lib (meaning a local directory or a nested one if running from an archive). Because of this, PropertiesLauncher behaves the same as JarLauncher when no additional configuration is provided.
- loader.path can not be used to configure the location of loader.properties (the classpath used to search for the latter is the JVM classpath when PropertiesLauncher is launched).
- Placeholder replacement is done from System and environment variables plus the properties file itself on all values before use.
The search order for properties (where it makes sense to look in more than one place) is environment variables, system properties, loader.properties, the exploded archive manifest, and the archive manifest.
E.5. Executable Jar Restrictions
You need to consider the following restrictions when working with a Spring Boot Loader packaged application:
- Zip entry compression: The ZipEntry for a nested jar must be saved by using the ZipEntry.STORED method. This is required so that we can seek directly to individual content within the nested jar. The content of the nested jar file itself can still be compressed, as can any other entries in the outer jar.
- System classLoader: Launched applications should use Thread.getContextClassLoader() when loading classes (most libraries and frameworks do so by default). Trying to load nested jar classes with ClassLoader.getSystemClassLoader() fails. Note that java.util.Logging always uses the system classloader. For this reason, you should consider a different logging implementation.
Appendix F: Dependency Versions
This appendix provides details of the dependencies that are managed by Spring Boot.
F.1. Managed Dependency Coordinates
The following table provides details of all of the dependency versions that are provided by Spring Boot in its CLI (Command Line Interface), Maven dependency management, and Gradle plugin. When you declare a dependency on one of these artifacts without declaring a version, the version listed in the table is used.
Group ID | Artifact ID | Version |
---|---|---|
F.2. Version Properties
The following table provides all properties that can be used to override the versions managed by Spring Boot. Browse the
spring-boot-dependencies
build.gradle for a complete list of dependencies. You can learn how to customize these versions in your application in the Build Tool Plugins documentation.
Library | Version Property |
---|---|