Hey Configurator, Get Me Some Logs!


If you look at the logs on the broker, they will tell you the name of the client that sent it. You might also see the client connect with its IP address, but that connection might have been made a while ago, as it may have been left open.



Python provides a logging system as a part of its standard library, so you can quickly add logging to your application. In this article, you will learn why using this module is the best way to add logging to your application as well as how to get started quickly, and you will get an introduction to some of the advanced features available.
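For instance, getting started can be as short as this (a minimal sketch; the messages are illustrative):

    import logging

    # basicConfig attaches a default handler to the root logger;
    # the level decides which records get through
    logging.basicConfig(level=logging.INFO)

    logging.info("application started")  # emitted
    logging.debug("noisy detail")        # filtered out at the INFO level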


While you can pass anything that can be represented as a string as the message for your logs, some basic elements are already part of the LogRecord and can easily be added to the output format. If you want to log the process ID along with the level and message, you can do something like this:
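A minimal sketch (the format directives are standard LogRecord attributes; the message is made up):

    import logging

    # %(process)d, %(levelname)s and %(message)s are built-in LogRecord attributes
    logging.basicConfig(format="%(process)d - %(levelname)s - %(message)s")

    logging.warning("could not open config file")
    # example output: 4214 - WARNING - could not open config file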


Handlers come into the picture when you want to configure your own loggers and send the logs to multiple places as they are generated. Handlers send the log messages to configured destinations such as the standard output stream, a file, an HTTP endpoint, or your email via SMTP.
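As a sketch of the idea (the logger and file names are made up), the same record can be fanned out to several destinations:

    import logging

    logger = logging.getLogger("myapp")
    logger.setLevel(logging.DEBUG)

    # each handler delivers every record the logger passes along
    logger.addHandler(logging.StreamHandler())           # console (stderr by default)
    logger.addHandler(logging.FileHandler("myapp.log"))  # a file on disk

    logger.info("written to the console and to myapp.log")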


In order to debug a .NET Core app that is failing on startup, I would like to write logs from within the startup.cs file. I have logging set up in the file that can be used in the rest of the app outside startup.cs, but I am not sure how to write logs from within the startup.cs file itself.


In your project, exclude the log4j-1.2 jar and instead include the log4j-1.2-api-2.1.jar. I wasn't sure how exactly to exclude log4j 1.2, and I didn't know which dependency of my project was requiring it. So, with some reading, I excluded a bunch of stuff.


I've spotted (from the help in docs/admin/files-directories.html#application-logs) the deconstructed app stdout and stderr logs in folders like /var/lib/rstudio-connect/jobs/45/yGVowwKSbridm8KU/, and I've seen that there are files in there, like bundle, which can help me reference back to the /var/lib/rstudio-connect/apps and /var/lib/rstudio-connect/bundles folders.


However, I'm struggling a bit trying to work out how to ingest the logs with app.markdown metadata into Splunk. E.g., if I'm running a Shiny app deployed as "EarthquakeTracker", then I'd like to have a tag in Splunk which allows me to search for all "EarthquakeTracker" logs.


I'm wondering if I can do something at the admin level by including a line in the log files - e.g. somehow getting RStudio Connect to inject the Shiny/Markdown/Plumber name into stdout alongside the "Starting R with process ID" header... If I can find a way to identify the currently running app (e.g. from the session or serverInfo), then I can probably do this during Rprofile.site startup...


One noteworthy "problem" with the title is that titles can change over time. I would think you would want to use something immutable like the App ID or App GUID for the tagging, but it'd be ideal if there was a lookup key within Splunk to give it a "pretty" name. Is that a "thing"? Would it be a problem if changing the title disassociated the content / created a new tag in Splunk?


There are ways we can look up some of that info... but by far the simplest way would be if that info came from the folder/file structure and/or from structured log lines (e.g. from outputting the info as JSON on disk) - could we look at some way to (optionally) change the way RStudio Connect logs data?


Unfortunately, Connect doesn't have a way to change log output / structure today, although this is something that we are very interested in exploring. The only option today would be some type of "pre-aggregator" that parses the logs / rewrites them. I think the level of effort there is enough to discourage the exercise - the easiest path is probably our improving the logging process / flexibility out of the box.


One option I have not mentioned yet: the connectapi package makes use of some private / internal APIs, one of which exposes the logs associated with "jobs." Unfortunately, there is no way to "stream" this output, so it is pretty one-dimensional, but the API would provide an opportunity to get much of this information. If you're interested in that approach, get_jobs() or get_job() are the functions you will want to explore!


That said, this is definitely on our roadmap and is a pain point that I wrestle with often personally. It is a part of a larger arc of "admin-facing" features that make our products work better in enterprise environments, the cloud, etc. It is definitely a priority for us, and something we are actively working towards. Unfortunately, how long until / whether it will meet your needs is something we just don't have any information on.


Sorry for not responding to this! We are unfortunately a bit cagey about our development and release plans / timelines / etc. as occasionally we alter our plans based on (1) whether a feature turns out to be more complex than originally planned, (2) whether we misperceived the customer value, (3) whether something else comes out of nowhere as a very high priority.


Definitely feel free to reach out to your customer success representative if/when you have roadmap questions / suggestions / etc. (Or share feedback here! Although sometimes our responsiveness struggles.)


We have open-sourced an in-house solution for exactly this issue -- forwarding logs to Splunk. The forum post is here: Announcing a way to forward logs from RSConnect to Elasticsearch, Splunk, and many others


Important: This reference is for the earlier CloudWatch Logs agent that is no longer supported. If you use Instance Metadata Service Version 2 (IMDSv2), then you must use the new unified CloudWatch agent. Even if you aren't using IMDSv2, it's a best practice to use the newer unified CloudWatch agent instead of the logs agent.


As you can see, the debugging records now appear in the output. As we mentioned previously, besides changing the log level, you can also decide how to process these messages. For example, say you want to write the logs to a file; you can then use the FileHandler like this:
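A sketch of what that might look like (the file name is made up):

    import logging

    logger = logging.getLogger(__name__)
    logger.setLevel(logging.DEBUG)

    # send records to app.log instead of the console
    handler = logging.FileHandler("app.log")
    handler.setFormatter(logging.Formatter("%(asctime)s - %(levelname)s - %(message)s"))
    logger.addHandler(handler)

    logger.debug("this record ends up in app.log")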


The FileHandler is just one of many useful built-in handlers. In Python, there is even an SMTP log handler for sending records to your mailbox, and one for sending the logs to an HTTP server. Can't find a handler for your needs? No worries, you can also write your own custom logging handler. For more details, please refer to the official documentation: the Basic Tutorial, the Advanced Tutorial, and the Logging Cookbook.


With that, if you want to configure the loggers for all modules under a package, it can be done easily by configuring a logger with the package as its name. For example, suppose you notice that the log messages from the modules in the foo.eggs package are too verbose and you want to filter most of them out. With the configuration hierarchy, you can set the foo.eggs package to a higher logging level, say WARNING. In this way, log messages from modules under foo.eggs that fall below WARNING are filtered out.
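Concretely, sticking with the foo.eggs example above (a sketch; the child module name is made up):

    import logging

    # logger names mirror the package path, so this one line covers
    # foo.eggs and every module beneath it
    logging.getLogger("foo.eggs").setLevel(logging.WARNING)

    logging.getLogger("foo.eggs.spam").info("filtered out")      # below WARNING
    logging.getLogger("foo.eggs.spam").warning("still emitted")  # WARNING and above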


If you use FileHandler for writing logs, the size of the log file grows over time, and someday it will occupy all of your disk space. To avoid that, you should use RotatingFileHandler instead of FileHandler in production environments.
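A sketch of the swap (the file name and limits are made up):

    import logging
    from logging.handlers import RotatingFileHandler

    logger = logging.getLogger("myapp")

    # roll over near 1 MB, keeping at most 5 old files (app.log.1 ... app.log.5)
    logger.addHandler(RotatingFileHandler("app.log", maxBytes=1_000_000, backupCount=5))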


@TonyJK Thank you for posting in the Microsoft Q&A forum. The important logs for client installation on the agent are ccmsetup.log and clientmsi.log; these logs are present in C:\windows\ccmsetup\logs. When troubleshooting client push installation, the first log to check is ccm.log in the server logs (%ProgramFiles%\Configuration Manager\Logs). From ccm.log you can see the start of the request made to the client, and you can verify whether the ccmsetup.exe service was able to run successfully on the targeted client machine. For more details, we may refer to this article: -us/archive/blogs/sudheesn/troubleshooting-sccm-part-i-client-push-installation


As a security best practice, add an aws:SourceArn condition key to the Amazon S3 bucket policy. The IAM global condition key aws:SourceArn helps ensure that CloudTrail writes to the S3 bucket only for a specific trail or trails. The value of aws:SourceArn is always the ARN of the trail (or array of trail ARNs) that is using the bucket to store logs. Be sure to add the aws:SourceArn condition key to S3 bucket policies for existing trails.


When you create a new bucket as part of creating or updating a trail, CloudTrail attaches the required permissions to your bucket. The bucket policy uses the service principal name, "cloudtrail.amazonaws.com", which allows CloudTrail to deliver logs for all regions.


If CloudTrail is not delivering logs for a region, it's possible that your bucket has an older policy that specifies CloudTrail account IDs for each region. This policy gives CloudTrail permission to deliver logs only for the regions specified.


As a best practice, update the policy to use a permission with the CloudTrail service principal. To do this, replace the account ID ARNs with the service principal name: "cloudtrail.amazonaws.com". This gives CloudTrail permission to deliver logs for current and new regions. As a security best practice, add an aws:SourceArn or aws:SourceAccount condition key to the Amazon S3 bucket policy. This helps prevent unauthorized account access to your S3 bucket. If you have existing trails, be sure to add one or more condition keys. The following example shows a recommended policy configuration. Replace myBucketName, [optionalPrefix]/, myAccountID, region, and trailName with the appropriate values for your configuration.
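The policy example itself did not survive here, but based on the placeholders named above, its shape is roughly the following sketch (check the CloudTrail documentation for the authoritative version):

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "AWSCloudTrailAclCheck",
          "Effect": "Allow",
          "Principal": {"Service": "cloudtrail.amazonaws.com"},
          "Action": "s3:GetBucketAcl",
          "Resource": "arn:aws:s3:::myBucketName",
          "Condition": {
            "StringEquals": {"aws:SourceArn": "arn:aws:cloudtrail:region:myAccountID:trail/trailName"}
          }
        },
        {
          "Sid": "AWSCloudTrailWrite",
          "Effect": "Allow",
          "Principal": {"Service": "cloudtrail.amazonaws.com"},
          "Action": "s3:PutObject",
          "Resource": "arn:aws:s3:::myBucketName/[optionalPrefix]/AWSLogs/myAccountID/*",
          "Condition": {
            "StringEquals": {
              "s3:x-amz-acl": "bucket-owner-full-control",
              "aws:SourceArn": "arn:aws:cloudtrail:region:myAccountID:trail/trailName"
            }
          }
        }
      ]
    }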


If you try to add, modify, or remove a log file prefix for an S3 bucket that receives logs from a trail, you might see the error "There is a problem with the bucket policy". A bucket policy with an incorrect prefix can prevent your trail from delivering logs to the bucket. To resolve this issue, use the Amazon S3 console to update the prefix in the bucket policy, and then use the CloudTrail console to specify the same prefix for the bucket in the trail.

