The main helper for solving this is the following article:
By following the instructions there, I was able to identify an example failure. The rule which I added and which solved it in the end was the following:
# At the top of "/etc/fapolicyd/rules.d/30-patterns.rules"
allow perm=open exe=/runc : ftype=application/x-sharedlib trust=1
Followed by running:
systemctl start fapolicyd
fapolicyd-cli --reload #this reload may be extraneous really
There are a handful of articles out there which ask this same question but none which answer it, so hopefully this helps.
* https://forums.docker.com/t/using-docker-ce-with-fapolicyd/147313
* https://forums.docker.com/t/disa-stig-and-docker-ce/134196
* https://www.reddit.com/r/redhat/comments/xvigky/fapolicy_troubleshooting/
From Stripe:
Stripe.js only uses essential cookies to ensure the site works properly, detect and prevent fraud, and understand how people interact with Stripe.
Could you please try this line after adding actions to the alert controller instance?
[listSheet setModalPresentationStyle:UIModalPresentationPopover];
The simple way is:
1 - Go to Developer options.
2 - Open "Wireless debugging".
3 - Choose "Pair device with pairing code".
4 - After opening option 3 in that list, run adb pair <ip of 3 option menu>
5 - Enter the pairing code from your device in the shell.
6 - After success, run adb connect <ip of 2 option menu>
Then use your device wirelessly in Flutter!
** When you disconnect from Wi-Fi or move out of signal range, you need to repeat these steps!
From my understanding, ipvlan l3 is needed/used when you want to do something way more complex. It essentially turns the host into a router, so you get a bunch of complications because of it - like your containers not being able to access the internet, because upstream routers don't know how to route traffic back to you.
You will never want this as a developer, as this feature does not target you at all. You will want it as an infrastructure/networking nerd who wants to optimize/customize the network. Think along the lines of Kubernetes, but even that uses a way more complex networking setup, and it "just works", whereas ipvlan l3 leaves you halfway there.
On this site you can validate the exact problem line, and it shows you the right way:
https://jsonformatter.tech/
You can share your Flutter app UI with a remote client by using Flutter's Hot Reload for live updates, or deploy the app to a platform like Firebase for easy web access. Tools like Appetize.io and Expo also allow you to preview the app without generating an APK.
It’s possible that something went wrong later in the CI process — maybe during bundling or exporting the archive. Would be great to double-check the actual CI-generated build to see if the asset is really there.
Just a few things to clarify:
Is the asset part of the main project or coming from a separate Swift Package?
What’s the target membership of the .xcassets file that contains it?
Also, if you can share the crash log (feel free to redact any sensitive info), that might help pinpoint exactly what’s failing.
In my case, it is because of Dart version -- I figured this out when updating Flutter from an older version to newer ones.
Upgrading to Dart 3.3.4 (came with Flutter 3.19.6 with `fvm`) solved this issue.
Make sure you cleared cookies after upgrading Rails. It's not Devise; you may just have a session cookie in the old, insecure format.
Use this:
<script>
function newMsg() {
document.getElementById("add_message").innerHTML = `
<div class="message">
Add Message<br>
Title: <input type="text" id="title"><br>
Text: <input type="text" id="message"><br><br>
</div>
`;
}
</script>
Matt is correct. Using the keyword argument any(axis=1) should work.
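A minimal sketch of what that looks like, assuming a pandas DataFrame (the frame and column names here are hypothetical, since the original question's data isn't shown):
import pandas as pd

df = pd.DataFrame({"a": [1, None, 3], "b": [4, 5, None]})
# any(axis=1) collapses each row to a single boolean:
# True if any value in that row is True.
mask = df.isna().any(axis=1)
print(df[mask])  # rows with at least one missing value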
Quote: CDO is pretty old now so assume that is an example of an app that doesn't support latest security standards.
What are the alternatives to create a script or batch file to send email? I can only get CDO to work with servers that support SSL set to false. As soon as I set SSL to true it fails to connect and I know at least one server I tested with definitely supports SSL on port 465 and startTLS or ports 25 and 587.
Sorry for this stupid question; I found the solution here: BigQuery: Extract values of selected keys from an array of json objects
select ARRAY(
SELECT JSON_EXTRACT_SCALAR(json_array, '$.start') from UNNEST(JSON_EXTRACT_ARRAY(metadata,"$.mentions"))json_array
) as extracted_start
TL;DR: nvarchar(max) is inefficient and should be avoided.
Queries against an nvarchar(max) field use more KB than queries against an nvarchar(10) field, even if the data stored within the two fields is the same. So the performance will be noticeably and measurably worse, which should be avoided.
At 47 minutes Tim Corey provides a pretty good explanation of this, complete with outside sources: https://www.youtube.com/watch?v=qkJ9keBmQWo.
Welp. RTFM. https://learn.microsoft.com/en-us/graph/api/shares-get?view=graph-rest-1.0&tabs=http
// Assumes axios for HTTP and an auth helper (defined elsewhere) that supplies the access token.
const axios = require("axios");
async function getDriveItemBySharedLink(sharedLink) {
// First, base64-encode the URL.
const base64 = Buffer.from(sharedLink).toString("base64");
// Convert the base64 result to unpadded base64url format: remove trailing "=" characters, replace "/" with "_" and "+" with "-".
const converted = base64
.replace(/=/g, "")
.replace(/\+/g, "-")
.replace(/\//g, "_");
// Prepend u! to the beginning of the string.
const updatedLink = `u!${converted}`;
const getDownloadURL = `https://graph.microsoft.com/v1.0/shares/${updatedLink}/driveItem`;
const authResponse = await auth.getToken();
const dirResponse = await axios.get(getDownloadURL, {
headers: {
Authorization: `Bearer ${authResponse.accessToken}`,
},
});
return dirResponse.data;
}
Getting the same error with JanusGraph 1.1.0; I've tried everything already... Any ideas how to resolve it with lucene/berkeley?
Here are my files:
/etc/systemd/system/janusgraph.service ::
[Unit]
Description = JanusGraph Server
Wants=network.target
After=local-fs.target network.target
[Service]
User = janusgraph
Group= janusgraph
Type = forking
ExecStart = /opt/janusgraph/bin/janusgraph-server.sh start
ExecStop = /opt/janusgraph/bin/janusgraph-server.sh stop
TimeoutStartSec=60
EnvironmentFile=/etc/janusgraph/janusgraph.env
Restart=on-failure
WorkingDirectory=/opt/janusgraph/
[Install]
WantedBy = multi-user.target
/etc/janusgraph/janusgraph.env ::
PATH=/usr/lib/jvm/java-11-openjdk-amd64/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
JAVA_HOME=/usr/lib/jvm/java-11-openjdk-amd64
LANG=en_US.UTF-8
LANGUAGE=en_US:en
LC_ALL=en_US.UTF-8
JAVA_VERSION=jdk-11.0.27+6
JANUS_VERSION=1.1.0
JANUS_HOME=/opt/janusgraph
JANUS_CONFIG_DIR=/opt/janusgraph/conf/gremlin-server
JANUS_DATA_DIR=/var/lib/janusgraph
JANUS_SERVER_TIMEOUT=30
JANUS_STORAGE_TIMEOUT=60
JANUS_PROPS_TEMPLATE=berkeleyje-lucene
JANUS_INITDB_DIR=/docker-entrypoint-initdb.d
/opt/janusgraph/conf/gremlin-server/gremlin-server.yaml ::
host: 0.0.0.0
port: 8182
evaluationTimeout: 30000
channelizer: org.apache.tinkerpop.gremlin.server.channel.WebSocketChannelizer
graphManager: org.janusgraph.graphdb.management.JanusGraphManager
graphs:
ConfigurationManagementGraph: /opt/janusgraph/conf/janusgraph.properties
graph: /opt/janusgraph/conf/janusgraph-berkeleyje-lucene.properties
scriptEngines: {
gremlin-groovy: {
plugins: { org.janusgraph.graphdb.tinkerpop.plugin.JanusGraphGremlinPlugin: {},
org.apache.tinkerpop.gremlin.server.jsr223.GremlinServerGremlinPlugin: {},
org.apache.tinkerpop.gremlin.tinkergraph.jsr223.TinkerGraphGremlinPlugin: {},
org.apache.tinkerpop.gremlin.jsr223.ImportGremlinPlugin: {classImports: [java.lang.Math], methodImports: [java.lang.Math#*]},
org.apache.tinkerpop.gremlin.jsr223.ScriptFileGremlinPlugin: {files: [scripts/empty-sample.groovy]}}}}
processors:
- { className: org.apache.tinkerpop.gremlin.server.op.session.SessionOpProcessor, config: { sessionTimeout: 28800000 }}
- { className: org.apache.tinkerpop.gremlin.server.op.traversal.TraversalOpProcessor, config: { cacheExpirationTime: 600000, cacheMaxSize: 1000 }}
metrics: {
consoleReporter: {enabled: true, interval: 180000},
csvReporter: {enabled: true, interval: 180000, fileName: /tmp/gremlin-server-metrics.csv},
jmxReporter: {enabled: true},
slf4jReporter: {enabled: true, interval: 180000},
graphiteReporter: {enabled: false, interval: 180000}}
maxInitialLineLength: 4096
maxHeaderSize: 8192
maxChunkSize: 8192
maxContentLength: 65536
maxAccumulationBufferComponents: 1024
resultIterationBatchSize: 64
writeBufferLowWaterMark: 32768
writeBufferHighWaterMark: 65536
/opt/janusgraph/conf/janusgraph.properties ::
gremlin.graph=org.janusgraph.core.JanusGraphFactory
storage.backend=berkeleyje
storage.directory=/var/lib/janusgraph/cm
index.search.backend=lucene
index.search.directory=/var/lib/janusgraph/cm-index
graph.graphname=ConfigurationManagementGraph
graph.allow-upgrade=true
storage.transactions=true
storage.berkeleyje.cache-percentage=35
storage.berkeleyje.isolation-level=READ_COMMITTED
/opt/janusgraph/conf/janusgraph-berkeleyje-lucene.properties ::
gremlin.graph=org.janusgraph.core.JanusGraphFactory
storage.backend=berkeleyje
storage.directory=/var/lib/janusgraph/berkeleyje
index.search.backend=lucene
index.search.directory=/var/lib/janusgraph/index
storage.berkeleyje.cache-percentage=35
storage.berkeleyje.isolation-level=READ_COMMITTED
/opt/janusgraph/conf/remote.yaml
hosts: [localhost]
port: 8182
serializer: { className: org.apache.tinkerpop.gremlin.util.ser.GraphBinaryMessageSerializerV1, config: { serializeResultToString: true }}
/opt/janusgraph/logs/janusgraph.log ::
23:28:04 INFO org.janusgraph.graphdb.server.JanusGraphServer.printHeader -
mmm mmm #
# mmm m mm m m mmm m" " m mm mmm mmmm # mm
# " # #" # # # # " # mm #" " " # #" "# #" #
# m"""# # # # # """m # # # m"""# # # # #
"mmm" "mm"# # # "mm"# "mmm" "mmm" # "mm"# ##m#" # #
#
"
23:28:04 INFO com.jcabi.log.Logger.infoForced - 108 attributes loaded from 345 stream(s) in 114ms, 108 saved, 5608 ignored: ["Agent-Class", "Ant-Version", "Archiver-Version", "Automatic-Module-Name", "Bnd-LastModified", "BoringSSL-Branch", "BoringSSL-Revision", "Build-Date", "Build-Date-UTC", "Build-Id", "Build-Java-Version", "Build-Jdk", "Build-Jdk-Spec", "Build-Number", "Build-Tag", "Build-Timezone", "Build-Version", "Built-By", "Built-JDK", "Built-OS", "Built-Status", "Bundle-ActivationPolicy", "Bundle-Activator", "Bundle-Category", "Bundle-ClassPath", "Bundle-Classpath", "Bundle-ContactAddress", "Bundle-Copyright", "Bundle-Description", "Bundle-Developers", "Bundle-DocURL", "Bundle-License", "Bundle-ManifestVersion", "Bundle-Name", "Bundle-NativeCode", "Bundle-RequiredExecutionEnvironment", "Bundle-SCM", "Bundle-SymbolicName", "Bundle-Vendor", "Bundle-Version", "Can-Redefine-Classes", "Can-Retransform-Classes", "Can-Set-Native-Method-Prefix", "Carl-Is-Awesome", "Change", "Copyright", "Created-By", "DSTAMP", "Dependencies", "DynamicImport-Package", "Eclipse-BuddyPolicy", "Eclipse-ExtensibleAPI", "Embed-Dependency", "Embed-Transitive", "Export-Package", "Extension-Name", "Extension-name", "Fragment-Host", "Gradle-Version", "Gremlin-Plugin-Dependencies", "Ignore-Package", "Implementation-Build", "Implementation-Build-Date", "Implementation-Build-Id", "Implementation-Title", "Implementation-URL", "Implementation-Vendor", "Implementation-Vendor-Id", "Implementation-Version", "Import-Package", "Include-Resource", "JCabi-Build", "JCabi-Date", "JCabi-Version", "Main-Class", "Manifest-Version", "Module-Origin", "Module-Requires", "Multi-Release", "Originally-Created-By", "Package", "Premain-Class", "Private-Package", "Provide-Capability", "Require-Bundle", "Require-Capability", "Sealed", "Specification-Title", "Specification-Vendor", "Specification-Version", "TODAY", "TSTAMP", "Target-Label", "Tool", "X-Compile-Elasticsearch-Snapshot", "X-Compile-Elasticsearch-Version", "X-Compile-Lucene-Version", "X-Compile-Source-JDK", "X-Compile-Target-JDK", "artifactId", "groupId", "hash", "janusgraphVersion", "service", "tinkerpop-version", "tinkerpopVersion", "url", "version"]
23:28:04 INFO org.janusgraph.graphdb.server.JanusGraphServer.printHeader - JanusGraph Version: 1.1.0
23:28:04 INFO org.janusgraph.graphdb.server.JanusGraphServer.printHeader - TinkerPop Version: 3.7.3
23:28:04 INFO org.janusgraph.graphdb.server.JanusGraphServer.start - Configuring JanusGraph Server from /opt/janusgraph/conf/gremlin-server/gremlin-server.yaml
23:28:04 INFO org.apache.tinkerpop.gremlin.server.util.MetricManager.addConsoleReporter - Configured Metrics ConsoleReporter configured with report interval=180000ms
23:28:04 INFO org.apache.tinkerpop.gremlin.server.util.MetricManager.addCsvReporter - Configured Metrics CsvReporter configured with report interval=180000ms to fileName=/tmp/gremlin-server-metrics.csv
23:28:04 INFO org.apache.tinkerpop.gremlin.server.util.MetricManager.addJmxReporter - Configured Metrics JmxReporter configured with domain= and agentId=
23:28:04 INFO org.apache.tinkerpop.gremlin.server.util.MetricManager.addSlf4jReporter - Configured Metrics Slf4jReporter configured with interval=180000ms and loggerName=org.apache.tinkerpop.gremlin.server.Settings$Slf4jReporterMetrics
23:28:04 INFO org.apache.commons.beanutils.FluentPropertyBeanIntrospector.introspect - Error when creating PropertyDescriptor for public final void org.apache.commons.configuration2.AbstractConfiguration.setProperty(java.lang.String,java.lang.Object)! Ignoring this property.
23:28:05 INFO org.janusgraph.diskstorage.configuration.builder.ReadConfigurationBuilder.setupTimestampProvider - Set default timestamp provider MICRO
23:28:05 INFO org.janusgraph.graphdb.idmanagement.UniqueInstanceIdRetriever.getOrGenerateUniqueInstanceId - Generated unique-instance-id=7f0001015381-ubuntu1
23:28:05 INFO org.janusgraph.diskstorage.Backend.getIndexes - Configuring index [search]
23:28:05 INFO org.janusgraph.diskstorage.configuration.ExecutorServiceBuilder.buildFixedExecutorService - Initiated fixed thread pool of size 4
23:28:05 INFO org.janusgraph.graphdb.database.StandardJanusGraph.<init> - Gremlin script evaluation is disabled
23:28:05 INFO org.janusgraph.diskstorage.log.kcvs.KCVSLog$MessagePuller.initializeTimepoint - Loaded unidentified ReadMarker start time 2025-05-21T20:28:05.844963Z into org.janusgraph.diskstorage.log.kcvs.KCVSLog$MessagePuller@39de9bda
23:28:06 INFO org.janusgraph.graphdb.idmanagement.UniqueInstanceIdRetriever.getOrGenerateUniqueInstanceId - Generated unique-instance-id=7f0001015381-ubuntu2
23:28:06 INFO org.janusgraph.diskstorage.Backend.getIndexes - Configuring index [search]
23:28:06 INFO org.janusgraph.diskstorage.configuration.ExecutorServiceBuilder.buildFixedExecutorService - Initiated fixed thread pool of size 4
23:28:06 INFO org.janusgraph.graphdb.database.StandardJanusGraph.<init> - Gremlin script evaluation is disabled
23:28:06 INFO org.janusgraph.diskstorage.log.kcvs.KCVSLog$MessagePuller.initializeTimepoint - Loaded unidentified ReadMarker start time 2025-05-21T20:28:06.183399Z into org.janusgraph.diskstorage.log.kcvs.KCVSLog$MessagePuller@5927f904
23:28:06 INFO org.apache.tinkerpop.gremlin.server.util.ServerGremlinExecutor.<init> - Initialized Gremlin thread pool. Threads in pool named with pattern gremlin-*
23:28:06 INFO org.apache.tinkerpop.gremlin.server.util.ServerGremlinExecutor.<init> - Initialized GremlinExecutor and preparing GremlinScriptEngines instances.
23:28:08 INFO org.apache.tinkerpop.gremlin.server.util.ServerGremlinExecutor.lambda$new$4 - Initialized gremlin-groovy GremlinScriptEngine and registered metrics
23:28:08 INFO org.apache.tinkerpop.gremlin.server.util.ServerGremlinExecutor.lambda$new$8 - A GraphTraversalSource is now bound to [g] with graphtraversalsource[standardjanusgraph[berkeleyje:/var/lib/janusgraph/berkeleyje], standard]
23:28:08 INFO org.apache.tinkerpop.gremlin.server.op.OpLoader.lambda$static$0 - Adding the standard OpProcessor.
23:28:08 INFO org.apache.tinkerpop.gremlin.server.op.OpLoader.lambda$static$0 - Adding the session OpProcessor.
23:28:08 INFO org.apache.tinkerpop.gremlin.server.op.OpLoader.lambda$static$0 - Adding the traversal OpProcessor.
23:28:08 INFO org.apache.tinkerpop.gremlin.server.GremlinServer.lambda$start$1 - Executing start up LifeCycleHook
23:28:08 INFO org.codehaus.groovy.vmplugin.v8.IndyInterface.fromCache - Executed once at startup of Gremlin Server.
23:28:08 INFO org.apache.tinkerpop.gremlin.server.GremlinServer.createChannelizer - idleConnectionTimeout was set to 0 which resolves to 0 seconds when configuring this value - this feature will be disabled
23:28:08 INFO org.apache.tinkerpop.gremlin.server.GremlinServer.createChannelizer - keepAliveInterval was set to 0 which resolves to 0 seconds when configuring this value - this feature will be disabled
23:28:08 INFO org.apache.tinkerpop.gremlin.server.AbstractChannelizer.lambda$configureSerializers$4 - Configured application/vnd.graphbinary-v1.0 with org.apache.tinkerpop.gremlin.util.ser.GraphBinaryMessageSerializerV1
23:28:08 INFO org.apache.tinkerpop.gremlin.server.AbstractChannelizer.lambda$configureSerializers$4 - Configured application/vnd.graphbinary-v1.0-stringd with org.apache.tinkerpop.gremlin.util.ser.GraphBinaryMessageSerializerV1
23:28:08 INFO org.apache.tinkerpop.gremlin.server.AbstractChannelizer.lambda$configureSerializers$4 - Configured application/vnd.gremlin-v3.0+json with org.apache.tinkerpop.gremlin.util.ser.GraphSONMessageSerializerV3
23:28:08 INFO org.apache.tinkerpop.gremlin.server.AbstractChannelizer.lambda$configureSerializers$4 - Configured application/json with org.apache.tinkerpop.gremlin.util.ser.GraphSONMessageSerializerV3
23:28:08 INFO org.apache.tinkerpop.gremlin.server.GremlinServer$1.operationComplete - Gremlin Server configured with worker thread pool of 1, gremlin pool of 2 and boss thread pool of 1.
23:28:08 INFO org.apache.tinkerpop.gremlin.server.GremlinServer$1.operationComplete - Channel started at port 8182.
I'm having a similar problem, wondering if there's a solution that you landed on
Thanks!
In my case I just cut and copied from mysql_old/ (10.4.32) only the folders (mysql, performance_schema, phpmyadmin, test) plus my databases into my new mysql/ folder (10.11.10).
Then I ran MySQL from XAMPP and that's all.
It's annoying that JFrog doesn't support this basic feature of modern-day browsing; after many years it still doesn't allow sorting by date.
from tensorflow.keras.models import load_model
model = load_model('/mypath/model.h5')
Sorry. Found it:
global::B.C.Class3
I think this may be a problem with whatever display software you're using for the file. When I load "ne_110m_admin_0_countries.shp" using https://mapshaper.org/ it works just fine, and I cannot find any lines over Greenland.
I ran into the same issue and investigated it.
Go uses a Windows API function called TransmitFile to transmit file data over connected sockets. Workstation and client versions of Windows limit the number of concurrent TransmitFile operations allowed on the system to a maximum of two. This is what causes the issue.
I reported this and submitted a change that makes Go avoid TransmitFile in such cases. The change has been merged and should be included in the next release of Go.
See:
In PyCharm 2025.1.1 there is a bunch of more detailed options:
Settings --> Editor --> Color Scheme --> Editor Gutter
Unset the checkboxes.
I’ve run into the same thing and was also confused since the wording in the UI and docs suggests modules and callables might be preserved. Looks like the "Remove all variables" action doesn't differentiate, even with "Exclude callables and modules" enabled, so probably worth keeping an eye on the GitHub issue you opened.
I found adding the "Security" Folder and these settings to my registry fixed my issue. From this article:
https://knowledge.digicert.com/solution/timestamp-vba-projects
Computer\HKEY_CURRENT_USER\SOFTWARE\Microsoft\VBA\Security
Registry settings:
*If the above folder does not exist, manually go to the VBA folder, right click, and add a new key called Security
(STRING VALUE) Name: TimeStampURL Value: http://timestamp.digicert.com
(DWORD) Name: TimeStampRetryCount Value: 3
(DWORD) Name: TimeStampRetryDelay Value: 5
(DWORD) Name: V1HashEnhanced Value: 3
@JulienD's post almost did it for me: https://stackoverflow.com/a/27501039/10761353 (go upvote him too!)
The only hitch was that I had a previous [url... insteadOf entry in my ~/.gitconfig.
Commenting out those 2 lines did the trick!
You can also create an Access Token in the Azure ACR and use it for a normal docker login.
It's under "Repository Permissions" -> "Tokens".
I had this same issue after I upgraded my project from .NET 6.0 to .NET 8.0 and also upgraded my package references to the latest versions. I tried everything listed above but nothing worked. Finally, I downloaded the Azure functions samples from github and downgraded my package references to those in the FunctionApp.csproj file. After that, the functions appeared in the console.
This question had the answer: MS Access - Hide Columns on Subform
Forms![2_4_6 QA Review]![2_4_6 QA Review subform].Form.Controls("Raw_Item").Properties("ColumnHidden") = True
According to ccordoba12, this is not possible.
See the same question on askubuntu.com's StackExchange, Unable to install "<PACKAGE>": snap "<PACKAGE>" has "install-snap" change in progress, for an excellent solution!
The very top answer there shows you how to abort the ongoing "install-snap" change for spotify, by running
$ snap changes
to see a list of ongoing changes:
...
123 Doing 2018-04-28T10:40:11Z - Install "spotify" snap
...
and then running
$ sudo snap abort 123
to kill that running change operation.
Then you can install spotify with sudo snap install spotify, without the error.
I was able to do it a slightly different way, by #define-ing default values and then declaring/defining the functions to get each of the params with a common macro.
#include <stdio.h>
#ifndef PARAM_ALPHA
#define PARAM_ALPHA (20)
#endif
#ifndef PARAM_BETA
#define PARAM_BETA (19)
#endif
#define DEFINE_ACCESSOR(name,macro_name) \
static inline unsigned int get_##name(){return macro_name;}
#define PARAM_LIST(X) \
X(ALPHA,PARAM_ALPHA) \
X(BETA,PARAM_BETA)
PARAM_LIST(DEFINE_ACCESSOR)
int main()
{
printf("\nAlpha: %u\n", get_ALPHA());
printf("\nBeta: %u\n", get_BETA());
}
I noticed the compiler burps if I use "#ifdef <something>" inside the inline C code.
So if I pass in -DPARAM_ALPHA=10 at compile time, that's the value I get; otherwise I get the default value of 20.
I encountered the same error and was confused, but I finally understood the situation. I found the following statement in the Google documentation:
Also, as of 2025-05-22, it seems that hd claim is not included if you authenticate with a Google Workspace Essentials Starter account.
In other words, this hd claim probably refers to the Google Workspace verified domain.
Hello @Marek, I'm trying to do the same thing as @Janko92. Does OpenClover still not print per-test coverage information in the XML report in the latest version? Thanks in advance!
Solved.
Need to add
context->setAllTensorsDebugState(true);
after
static DebugPrinter debug_listener;
context->setDebugListener(&debug_listener);
I've experienced a lot of pain with this, so I built an eslint plugin on top of eslint-plugin-import.
The purpose is to help developers clean up their imports and ensure their circular dependencies are real and not just from index <-> index shenanigans.
It still allows you to use index.ts structure for clean imports from external modules.
Perhaps it is useful for you
If you've already tried all the suggestions in the previous answers and are still encountering the error, try installing the latest Visual C++ Redistributable.
I had the same issue with Android Studio Meerkat 2025, and installing the redistributable resolved it for me.
Ask ChatGPT; it will always give you a good solution.
I ended up deleting the master and then recreating it. Fortunately, in our case, this wasn't a big deal because the changes were minimal and develop and release were current with those changes.
That's the worst question you can think of. What are you? Your question is so bad that even a 13-year-old would write better code than it.
If your app becomes large or heavily state-driven, you might want to:
Use named routes for better readability.
Use state management tools (like Riverpod, Provider, Bloc) to decouple navigation logic.
Use GoRouter for declarative routing with parameters and results.
Multiplying by 33 does two things: it pushes the old hash aside, to the left, which leaves 5 zeros on the right; then it adds a copy of the old hash to fill in those zeros. This is shown in the C code by the shift and add, which are used to speed up the function. But why 33 and not 17 or 65? ASCII alphabet letters a-z have the values 1-26 in the 5 rightmost bits. This span is cleared by a 5-bit shift, but not a 4-bit shift, and a 6-bit shift (64) would not be as compact or frugal a hash.
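Here is a minimal sketch of that shift-and-add in Python (a hypothetical djb2-style function, just to illustrate that h * 33 equals (h << 5) + h):
def djb2(s: str) -> int:
    h = 5381
    for ch in s:
        # h * 33 == (h << 5) + h: the shift pushes the old hash left,
        # clearing the 5 low bits; the add fills them with a copy of the old hash.
        h = ((h << 5) + h + ord(ch)) & 0xFFFFFFFF  # keep it at 32 bits
    return h

print(hex(djb2("hello")))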
From a quick read, what I gathered is there are essentially two ways you can accept a Python dictionary:
* &Bound<'py, PyDict>, where pyo3 automatically holds the GIL as long as your function runs.
* Py<PyDict>, where whenever you want to access it you have to get hold of the GIL first with Python::with_gil or similar.
and two ways to work with it:
* directly, via the Python interface (PyDictMethods)
* converting the PyDict to a HashMap (or other T: FromPyObject)
You can mix and match them as required, for example accepting a Bound and working directly with the methods:
use pyo3::{prelude::*, types::PyDict};
#[pyfunction]
pub fn process_dict(map: &Bound<'_, PyDict>) -> PyResult<()> {
if let Some(value) = map.get_item("apple")? {
println!("Value for 'apple': {}", value);
} else {
println!("Key not found");
}
Ok(())
}
Which has the advantage that you don't have to care about the GIL, and there's no overhead needed to convert the Python dict to a Rust type. The disadvantages are that the GIL is held for the entire runtime of the function and you're limited to what the Python interface has to offer.
Or accepting a GIL-independent Py and converting the value to a Rust type:
use std::collections::HashMap;
use pyo3::{prelude::*, types::PyDict};
#[pyfunction]
pub fn process_dict(map: Py<PyDict>) -> PyResult<()> {
Python::with_gil(|gil| {
let map: HashMap<String, i64> = map.extract(gil).unwrap();
if let Some(value) = map.get("apple") {
println!("Value for my 'apple': {}", value);
} else {
println!("Key not found");
}
});
Ok(())
}
Advantages include having precise control over where the GIL is held and getting to work with Rust-native types, while the disadvantages are the added complexity of handling the GIL as well as the overhead incurred by converting the PyDict to a HashMap.
So to answer your questions directly:
How to solve this error? What is expected here and why?
Pass in a Python object that proves you have the GIL, because it's needed to safely access the dictionary.
Do I have to use the extract method here, is there a simpler method?
No, not at all necessary; you can directly work with a &Bound<'_, PyDict> and its methods instead.
Is the map.extract() function expensive?
Somewhat; it has to copy and convert the Python dictionary to a Rust type.
You have to write your own type declarations. An example of this is in this issue: https://github.com/publiclab/Leaflet.DistortableImage/issues/1392 It seems native declarations won't be added.
I encountered this error after adding a Blazor web project to a Windows service as a reference. I removed the reference by moving the required services/code into a separate class file.
Native .NET delegates are immutable; once created, the invocation list of a delegate does not change.
This means that every time you add or remove a subscriber, the invocation list gets rebuilt, causing GC pressure.
Since UnityEvents use an actual list, they do not.
For multicast delegates that are frequently subscribed/unsubscribed to, it might be worth considering using a UnityEvent instead.
Does anyone have an idea on how to copy the metadata properly, or should I consider restructuring the pipeline for a better approach?
To achieve this, start by copying the list of files as an array, then use a data flow to transform the array so each file name appears on its own row.
Use the Get Metadata activity to retrieve the list of files from your source and destination blob containers.
Use a Filter activity to filter out non-existing files.
Use a ForEach activity to loop through the filtered list, and use an Append Variable activity to store each file name in an array variable.
Create a dummy file with only one value and use the Copy activity to append the variable's value to it as an additional column (Filenames). Then split that column back into rows with:
split(replace(replace(Filenames, '[', ''), ']', ''), ',')
Check this similar issue: https://learn.microsoft.com/en-us/answers/questions/912250/adf-write-file-names-into-file
This post might be 8 years old, but for anyone encountering it for the first time like me: you can find your dashboard at http://localhost/wordpress/wp-admin/profile.php , where "localhost/wordpress" is whatever comes up when you press the MAMP WordPress link.
Change reference "System.Data.SqlClient" to "Microsoft.Data.SqlClient"
Can I associate the WebACL directly with the API Gateway instead?
Yeah the web ACL should be associated directly with the API Gateway. Edge-optimized API Gateway is still a regional resource so the web ACL should be created in the same region as the API Gateway.
Got it worked out. I ended up needing to add the sanitize: true parameter, since HTML is included in the content.
Were you able to resolve this? I am having the same issue, except I have already confirmed that "Allow Duplicate Names" is activated and restarted the application. Every time I press ok to accept the duplicate name, the unique naming warning message appears again.
I did flutter run -v and it started to work.
There have been changes since this was answered. MutationObserver is pretty commonplace now.
I started using workspaces in uv and managed to find a very elegant solution to this problem. Here is an example of how I set up my projects with uv nowadays:
TLDR;
Spring Security couldn't find the jwkUri. Adding the line below fixed the issue.
.jwkSetUri(realmUrl + "/certs")
OK, so after adding the DEBUG option for Spring Security (which I completely forgot existed), I didn't get much wiser. There were no errors or anything of value shown in the DEBUG logs.
When I went digging some more in the docs I found the 'failureHandler'
.failureHandler((request, response, exception) -> {
exception.printStackTrace();
response.sendRedirect("/login?error=" + URLEncoder.encode(exception.getMessage(), StandardCharsets.UTF_8));
})
This showed that it couldn't find the jwk uri. After adding this line:
.jwkSetUri(realmUrl + "/certs")
to my clientRegistrationRepository, everything worked.
Thanks for the push in the right direction Toerktumlare
I tried every solution given here but still got the error.
One can use a MongoDB embedded operator inside a query to extract the date from the _id.
I've used it to figure out the creation date of documents when I retroactively needed it:
{"createdAt": {"$toDate": "$_id"}}
Or any object id:
{"createdAt": {"$toDate": ObjectId("67e410e95889aedda612bcdf")}}
Cannot comment yet, but this is an extended version of @greg p's query above. You might need to add the other fields if using different variable types/languages/etc.
CREATE OR REPLACE PROCEDURE EDW.PROC.GET_HUMAN_READABLE_PROCEDURE("P_FULLY_QUALIFIED_PROCEDURE_NAME" TEXT)
RETURNS TEXT
LANGUAGE SQL
EXECUTE AS OWNER
AS
DECLARE
final_ddl TEXT;
BEGIN
let db TEXT:= (split_part(P_FULLY_QUALIFIED_PROCEDURE_NAME,'.',1));
let schema_ TEXT:=(split_part(P_FULLY_QUALIFIED_PROCEDURE_NAME,'.',2));
let proc_name TEXT:=(split_part(P_FULLY_QUALIFIED_PROCEDURE_NAME,'.',3));
let info_schema_table TEXT:=(CONCAT(:db, UPPER('.information_schema.procedures')));
SELECT
'CREATE OR REPLACE PROCEDURE '||:P_FULLY_QUALIFIED_PROCEDURE_NAME||ARGUMENT_SIGNATURE||CHAR(13)
||'RETURNS '||DATA_TYPE||CHAR(13)
||'LANGUAGE '||PROCEDURE_LANGUAGE||CHAR(13)
||'EXECUTE AS OWNER'||CHAR(13)
||'AS '||CHAR(13)||PROCEDURE_DEFINITION||';'
INTO :final_ddl
FROM identifier(:info_schema_table)
WHERE PROCEDURE_SCHEMA=:schema_
AND PROCEDURE_NAME=:proc_name;
RETURN :final_ddl;
END;
This does not work during disposal:
A task is running and posting status to the ToolStripStatusLabel, with a halt condition on disposal.
Form closing contains proper waits for the task to end.
The code posting the text contains the above suggestion plus guards for the closing flag and IsDisposing/IsDisposed checks, but the "Cannot access a disposed object" exception was still thrown.
// A minimal component wrapping the original snippet so it runs
function Example() {
  const myObject = "<span style='color: red;'>apple</span>tree";
  return (
    <div dangerouslySetInnerHTML={{ __html: myObject }} />
  );
}
Here is a short script to save your params in a side text file. Enjoy!
The big part is the error handling on file writes. If an error occurs, you won't have to quit Script Editor to get rid of it.
set gcodeFile to (choose file with prompt "Source File" of type "gcode") -- get file path
set commentFile to (gcodeFile as text) & "_params.txt" --set destination file name ... roughly
set fileContent to read gcodeFile using delimiter {linefeed, return} -- read file content and split it to every paragraph
set comments to "" -- prepare to collect comments
repeat with thisLine in fileContent
if (thisLine as text) starts with ";" then set comments to comments & linefeed & (thisLine as text)
end repeat
try
set fileHandler to open for access file commentFile with write permission -- open
write comments to fileHandler -- write content
close access fileHandler -- close
on error err
close access fileHandler -- important !!
display dialog err
end try
Thank you for this post. The last block of code won't run for me in Snowflake, so I changed ON to WHERE. Original:
UPDATE ToTable as t2
set val = r.total
from (
select ID,
sum(HiddenCash) + sum(Cash) + sum(income) as total
from SourceTable
group by ID
) as r
on t2.ID= r.ID;
Changed to:
UPDATE ToTable as t2
set val = r.total
from (
select ID,
sum(HiddenCash) + sum(Cash) + sum(income) as total
from SourceTable
group by ID
) as r
where t2.ID= r.ID;
With this .reg I log my user automatically :
[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon]
"AutoAdminLogon"="1"
"DefaultUserName"="My User Name"
"DefaultPassword"="My Password"
"DefaultDomainName"="My default Domain Name"
Then I run my script as a boot task, and then my Wi-Fi is connected and working. I don't use nssm anymore.
For those who might encounter my issue (even if I doubt that it can be reproduced with another config), this resolved it.
The downside is that my PC takes longer to become fully operational, but I don't care (<2 min).
Yes, Flutter makes this pattern easy using Navigator.push and Navigator.pop.
Here’s a full working example:
Screen A (caller):
import 'package:flutter/material.dart';
import 'screen_b.dart'; // assume you created this separately
class ScreenA extends StatefulWidget {
@override
_ScreenAState createState() => _ScreenAState();
}
class _ScreenAState extends State<ScreenA> {
String returnedData = 'No data yet';
void _navigateAndGetData() async {
final result = await Navigator.push(
context,
MaterialPageRoute(builder: (context) => ScreenB()),
);
if (result != null) {
setState(() {
returnedData = result;
});
}
}
@override
Widget build(BuildContext context) {
return Scaffold(
appBar: AppBar(title: Text("Screen A")),
body: Center(
child: Column(
mainAxisAlignment: MainAxisAlignment.center,
children: [
Text('Returned data: $returnedData'),
ElevatedButton(
onPressed: _navigateAndGetData,
child: Text('Go to Screen B'),
),
],
),
),
);
}
}
Screen B (Returns Data) :
import 'package:flutter/material.dart';
class ScreenB extends StatelessWidget {
final TextEditingController _controller = TextEditingController();
@override
Widget build(BuildContext context) {
return Scaffold(
appBar: AppBar(title: Text("Screen B")),
body: Padding(
padding: const EdgeInsets.all(16.0),
child: Column(
children: [
TextField(controller: _controller),
SizedBox(height: 20),
ElevatedButton(
onPressed: () {
Navigator.pop(context, _controller.text); // return data
},
child: Text('Send back data'),
),
],
),
),
);
}
}
Navigator.push returns a Future that completes when the pushed route is popped.
In ScreenB, Navigator.pop(context, data) returns the data to the previous screen.
You can await the result and use setState to update the UI.
This is the Flutter-recommended way to pass data back when popping a route.
Why does Navigator.push return a Future?
In Flutter, Navigator.push() creates a new route (i.e., a screen or a page) and adds it to the navigation stack. This is an asynchronous operation: the new screen stays on top until it's popped.
Because of this, Navigator.push() returns a Future<T>, where T is the data type you expect when the screen is popped. The await keyword lets you wait for this result without blocking the UI.
final result = await Navigator.push(...); // result gets assigned when the screen pops
How does Navigator.pop(context, data) work?
When you call:
Navigator.pop(context, 'some data');
You're removing the current screen from the navigation stack and sending data back to the screen below. That data becomes the result that was awaited by Navigator.push.
Think of it like a dialog that returns a value when closed — except you're navigating entire screens.
This navigation-and-return-data pattern is especially useful in cases like:
Picking a value from a list (e.g., selecting a city or a contact).
Filling out a form and submitting it.
Performing any interaction in a secondary screen that should inform the calling screen of the result.
This works for me if I choose Save As "CSV UTF-8 (comma delimited) in Excel, and then open the stream reader in C# with ASCII.
using (var reader = new StreamReader(@fileSaved, Encoding.ASCII))
Initially, we suspected it was entirely due to Oracle client cleanup logic during Perl's global destruction phase. However, after extensive testing and valgrind analysis, we observed that the crash only occurs on systems running a specific glibc version (2.34-125.el9_5.8), and disappears when we upgraded to glibc-2.34-168 from the RHEL 9.6 Beta.
Resolved. When I iterate over a DataLoader, it calls the Subset's __getitems__, and not __getitem__ (the one which I had overridden). And the former calls the dataset's __getitem__ instead of the Subset's.
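For anyone hitting the same thing, a minimal sketch of the workaround (assuming a recent PyTorch where DataLoader prefers a dataset's batched __getitems__ when it exists; the doubling transform is just an example):
import torch
from torch.utils.data import DataLoader, Subset, TensorDataset

class TransformedSubset(Subset):
    def __getitem__(self, idx):
        (x,) = super().__getitem__(idx)
        return x * 2  # example transform

    # DataLoader fetches whole batches through __getitems__ when it exists,
    # bypassing the override above, so route it back explicitly.
    def __getitems__(self, indices):
        return [self[i] for i in indices]

ds = TensorDataset(torch.arange(6).float())
loader = DataLoader(TransformedSubset(ds, [0, 2, 4]), batch_size=3)
print(next(iter(loader)))  # tensor([0., 4., 8.])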
So I figured this out myself. My EventHub had a Cleanup policy of "Compact" and not "Delete". Apparently there is a requirement when pushing messages to an EventHub with "Compact" cleanup policy to have a PartitionKey included, which I was not including. The only way I found this out was the LogAnalytics table named AZMSDiagnosticErrorLogs. It had a single error repeated:
compacted event hub does not allow null message key.
There were no error messages anywhere else that I could find.
So to fix it, in my Stream Analytics output settings, I specified a column as the Partition key.
In order to keep the structure that I want in the resources folder, I have done it like this:
In views -> admin -> app.blade.php:
{{ Vite::useBuildDirectory('admin')->withEntryPoints(['resources/admin/sass/app.scss', 'resources/admin/js/app.js']) }}
In the resources -> admin folder I left only the js + sass folders (the app itself), and in the root project I added these two configs:
vite.admin.config.js
vite.store.config.js
vite.admin.config.js
import {defineConfig} from 'vite';
import laravel from 'laravel-vite-plugin';
import vue from '@vitejs/plugin-vue';
export default defineConfig({
plugins: [
laravel({
buildDirectory: 'admin',
input: ['resources/admin/sass/app.scss', 'resources/admin/js/app.js'],
refresh: true,
}),
vue({
template: {
transformAssetUrls: {
base: null,
includeAbsolute: false,
},
},
}),
],
resolve: {
alias: {
'@': '/resources/admin/js',
}
},
server: {
host: '0.0.0.0',
hmr: {
clientPort: 5173,
host: 'localhost',
},
fs: {
cachedChecks: false
}
}
});
and in the package.json:
...
"scripts": {
"dev.store": "vite --config vite.store.config.js",
"build.store": "vite build --config vite.store.config.js",
"dev.admin": "vite --config vite.admin.config.js",
"build.admin": "vite build --config vite.admin.config.js"
},
...
and it's working, both the dev server and the build :)
Try:
select class, guns, displacement from classes join (
select max(guns) as guns, displacement from classes group by displacement ) maxes
on classes.guns = maxes.guns and classes.displacement = maxes.displacement
Yes, it solved my issue. Adding 127.0.0.1 to the authorised domains on Firebase and running the app on 127.0.0.1 solved it for me.
Same question: how do you configure LDAP/AD since Airflow 3?
I want to scan with pytesseract, but the page numbers are not recognized. The page number is not recognized on any of the pages.
Utilizing Windows 10 and Python 3.13.3
Change this:
config = r"--psm 3" # 3
To:
config = r"--psm 6 --oem 3 -l eng"
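For context, a minimal sketch of how that config string is passed (page.png is a hypothetical scanned page; assumes pytesseract and Pillow are installed):
import pytesseract
from PIL import Image

# --psm 6: assume a single uniform block of text; --oem 3: default OCR engine.
config = r"--psm 6 --oem 3 -l eng"
text = pytesseract.image_to_string(Image.open("page.png"), config=config)
print(text)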
I had the same problem, and for some strange reason, when I ran the report in SQL it came out fine; the problem was when I downloaded the report in the .txt | delimited version. What I did to solve it was:
asdf plugin update ruby
This ^ worked for installing ruby 3.1.7 as well.
A simpler way using only base functions.
x<- list(Sys.Date(),Sys.Date()+1)
xx <- as.Date(as.numeric(x))
str(x)
str(xx)
Nothing is wrong with my CMakeLists.txt file. I have security software that was interfering with the correct functionality of MinGW64 on the PC where I saw this problem. When I switched to a PC without that security software, everything worked.
I know it's not code, but nice music taste!
I think you don't have to use the global index.scss.
How about injecting the styles into the shadow root manually?
After creating the shadow root:
const styleTag = document.createElement('style');
shadowRoot.appendChild(styleTag);
Inject the compiled SCSS into the Shadow DOM:
styleTag.textContent = require('./index.scss'); // Add this line
Good luck.
Depending on your use of line-heights in your document, perhaps you could use the dimension rlh (root line-height) or dimension lh in your CSS...
*.mystyle{
line-height: calc(1rlh + 4px);
}
I had this same issue and I searched everywhere, including ChatGPT, but to no avail. Little did I know my Kotlin version was the whole issue. I just upgraded my Kotlin version from 1.8.22 to 2.1.20 and the issue is resolved now.
The authorization token in the URL returned has a lifetime of 5 minutes. You need to get a new URL for each embedding session.
Before importing a project into PyCharm, I create the project with a Python scaffolding tool like psp: https://github.com/MatteoGuadrini/psp
You run the psp command and answer the questions, and you obtain a completely scaffolded project.
Lastly, import the project into PyCharm.
In Windows Terminal, navigate to Settings > Defaults > Advanced > Profile termination behavior and set it to "Never close automatically".
The idea: use __getattribute__() to shadow/substitute all the names you want.
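A minimal sketch of that idea (the Shadow/Target classes and the shadowed attribute name are hypothetical):
class Shadow:
    # Names to substitute, and their replacement values.
    _overrides = {"name": "shadowed"}

    def __init__(self, wrapped):
        object.__setattr__(self, "_wrapped", wrapped)

    def __getattribute__(self, attr):
        # Use object.__getattribute__ to avoid recursing into ourselves.
        overrides = object.__getattribute__(self, "_overrides")
        if attr in overrides:
            return overrides[attr]
        # Everything else is looked up on the wrapped object.
        return getattr(object.__getattribute__(self, "_wrapped"), attr)

class Target:
    name = "real"
    value = 42

s = Shadow(Target())
print(s.name, s.value)  # shadowed 42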
There is no way for you to self-host Firebase on your own private cloud server. But you can try Supabase, which is an open-source, self-hosted Backend-as-a-Service (BaaS) platform similar to Firebase (if your question is about self-hosting a BaaS platform in your own private cloud). The first key difference between Supabase and Firebase is that Supabase is built on top of a relational database, whereas Firebase is built on top of a NoSQL document-based database.
After you've run your query, you will get access to a "Save Results" dropdown. In this dropdown you can select "CSV local". This will download only the table you've created in the query, i.e. only the columns you want.
There are many options and a good answer can only be provided with a bit more info... What is the sqlcode from db2? fail is a bit too generic... What tools do you use? Import, load, direct insert / select with federated data source? If you use a file to transport the data, then how does the file represent the null?
I don't think VS Code supports system("cls");, and I'm not sure why. Just run the .exe file and use the Windows terminal, and it works.
Commands prefixed with "-" (dash) always return 0, even if the command fails.
So you can put a "-" before a command if you want the batch to continue past errors on that line.
Thanks for the pointer on how to code this. However, the answer from @K. Nielson has a small error, so I'm posting this here for other people. Consider this MWE:
import numpy as np
from scipy import sparse
a = sparse.eye(3).tocsr()
b = a.copy().tolil()
b[1, 0] = 1e-16
b = b.tocsr()
print(f"{np.allclose(a.todense(), b.todense())=}")
# np.allclose(a.todense(), b.todense())=True
print(f"{csr_allclose(a, b)=}")
# csr_allclose(a, b)=np.False_
Here, the proposed answer gives False. Looking more closely at the NumPy docs, there is one np.abs too many. This works:
def csr_allclose2(a, b, rtol=1e-5, atol = 1e-8):
c = np.abs(a - b) - rtol * np.abs(b)
return c.max() <= atol
print(f"{csr_allclose2(a, b)=}")
# csr_allclose2(a, b)=np.True_
As gsl_rng_env_setup() provides the initial values (either from the env or as the lib defaults), program default values must be assigned after gsl_rng_env_setup(), but before gsl_rng_default is used, and only if the env variables are not set.
It is worth noting that M-x ediff opens up a help window that eases navigation of the difference regions across documents. For instance, typing 6 and then j in the help window jumps to the 6th diff region.
Thank you, solved my problem with vagrant up ubuntu-focal64
It seems with psexec you can only pipe the output into a file; it won't show on Azure's task console: https://superuser.com/questions/649550/redirect-output-of-process-started-locally-with-psexec