My webpack.config.js:
const path = require('path');
const HtmlWebpackPlugin = require('html-webpack-plugin');
const { CleanWebpackPlugin } = require('clean-webpack-plugin');
const webpack = require('webpack');

module.exports = {
  mode: 'development',
  devtool: 'source-map',
  context: path.resolve(__dirname, ''),
  entry: './src/camera.js',
  output: {
    path: path.resolve(__dirname, 'dist'),
    filename: 'bundle.js',
    clean: true,
  },
  devServer: {
    static: path.resolve(__dirname, ''),
    port: 3000,
    open: true,
    hot: true,
    //compress: true,
    historyApiFallback: true
  },
  plugins: [
    new HtmlWebpackPlugin({
      template: './html/camera.html',
    }),
    new CleanWebpackPlugin(),
  ],
};
Consider reducing the batch size.
<groupId>com.xebialabs.xldeploy</groupId>
<artifactId>xldeploy-maven-plugin</artifactId>
<version>23.1.0</version>
Use JDK 17 to solve the API compatibility issue, and use the plugin version above.
Yes, this is expected.
Angular applies it to inline scripts as well, as part of its effort to ensure that both inline scripts and styles are handled consistently when enforcing a CSP.
This behavior ensures that Angular works in environments where a CSP is enabled, and it prevents inline scripts and styles from being blocked
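If you need to supply the nonce yourself, newer Angular versions (16+) expose a CSP_NONCE injection token. A minimal sketch, where myNonceFromServer is a hypothetical global that your server sets per request:

import { bootstrapApplication } from '@angular/platform-browser';
import { CSP_NONCE } from '@angular/core';
import { AppComponent } from './app/app.component';

// Provide the per-request nonce so Angular stamps it on the inline styles/scripts it creates.
bootstrapApplication(AppComponent, {
  providers: [
    { provide: CSP_NONCE, useValue: (globalThis as any).myNonceFromServer },
  ],
});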
This issue should be fixed, see here:
In the end the issue was that I had more than one @ConfigInitializer within my test class hierarchy. This led to different contexts.
I had a similar problem on libsoup 3.0 and linuxmint/cjs, and this worked for me. It's been 10 years, but I'm leaving this here in case someone has the same problem and finds this useful.
message.connect("accept-certificate", function () {
    return true;
});
Try:
- Uninstalling and reinstalling Turbo C++
- Ensuring you are installing the newest version.
Translated translation units and instantiation units are combined as follows:
Each translated translation unit is examined to produce a list of required instantiations.
The definitions of the required templates are located.
All the required instantiations are performed to produce instantiation units.
The program is ill-formed if any instantiation fails.
The separate translation units of a program communicate by (for example) calls to functions whose identifiers have
external linkage;
manipulation of objects whose identifiers have external linkage;
manipulation of data files.
Some or all of these translated translation units and instantiation units may be supplied from a library. Required instantiations may include instantiations which have been explicitly requested. It is implementation-defined whether the source of the translation units containing these template definitions is required to be available. Translation units can be separately translated and then later linked to produce an executable program.
Thus, instantiation units are similar to translated translation units, but contain no references to un-instantiated templates and no template definitions.
Given I'm doing this in C++, is it possible to define the function like this: int find(int &x):
to save on memory use?
The site has shut down. While the developer has not given an official statement, it does seem that they are no longer maintaining the servers, hence the shutdown.
The flag has been renamed to "Insecure origins treated as secure" (chrome://flags/#unsafely-treat-insecure-origin-as-secure) and now has an input box to safelist your self-signed-certificate domain names.
readAsBinaryString() is deprecated but I wasn't able to make it work without it.
Your uncaught exception is coming from the finally block, where you try to remove the job. This happens regardless of the success of your try block.
How can we use typmod if the TYPMOD_IN function parses and returns the correct typmod, but the INPUT function always gets -1 from:
Node * coerce_type(ParseState *pstate, Node *node, Oid inputTypeId, Oid targetTypeId, int32 targetTypeMod, CoercionContext ccontext, CoercionForm cformat, int location)
which says:
/*
* For most types we pass typmod -1 to the input routine, because
* existing input routines follow implicit-coercion semantics for
* length checks, which is not always what we want here. Any length
* constraint will be applied later by our caller. An exception
* however is the INTERVAL type, for which we *must* pass the typmod
* or it won't be able to obey the bizarre SQL-spec input rules. (Ugly
* as sin, but so is this part of the spec...)
*/
if (baseTypeId == INTERVALOID)
    inputTypeMod = baseTypeMod;
else
    inputTypeMod = -1;
If you have already enabled AJAX under WooCommerce > Settings > Products ("Enable AJAX add to cart buttons on archives"), then I would suspect a plugin conflict or perhaps caching.
Ensure the cache, both on the website and the server, is flushed, and test in Incognito mode to see if the reload is gone.
Thanks - this helped me as well for a project I am doing. Indeed, I also believe Apps Script has a bug with PositionedImage: if the PositionedImage is inserted at the very last paragraph, it seems to duplicate the image. Adding this buffer paragraph resolved the issue for me.
I found a solution. In my case, cargo was not linked to Homebrew, so OpenSSL was not found.
export LIBRARY_PATH="$LIBRARY_PATH:$(brew --prefix)/lib"
This command links cargo to Homebrew's libraries.
Possible, but a bit harder than you might expect. Let me give an example in a moment.
Voximplant iOS SDK is distributed as pre-built frameworks without debug symbols in cocoapods as well as in SPM.
It does not block an iOS app distribution to AppStore/TestFlight. The "Upload Symbols Failed" message is just a warning and the app build should be uploaded successfully.
If you face any crashes related to the Voximplant SDK, feel free to contact the Voximplant team.
Including the {C special character does not work because that is the FNC1 code for the Epson printer. The special character for encoding CODE C is actually {1.
Also, as Terry Warwick alluded to, the font setting does not affect the printed barcode. I believe what you mean by Font A, Font B, and Font C are actually Subset A, Subset B, and Subset C, which you indicate you would like to use by adding the appropriate special character above. When switching subsets within a barcode string, you should also include the Shift character, {S.
Try {APQR123X{S{11122331807110011223344
Can anyone point to documentation that unravels how Epson does font-switching in Code 128?
https://files.support.epson.com/pdf/pos/bulk/tm-i_epos-print_um_en_revk.pdf
Vertex AI requires:
- A health endpoint (e.g., /health) that returns a 200 OK status when the model is ready.
- A prediction endpoint (e.g., /predict) that handles inference requests.
Add /health (returns 200 OK when ready) and /predict endpoints to your FastAPI app, as sketched below. Update your gcloud ai models upload command with --container-health-route=/health --container-predict-route=/predict --container-ports=8080. Redeploy and check Cloud Logging for errors.
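A minimal sketch of the two routes, assuming a FastAPI app; load_model() and EchoModel are placeholders for your own loading and inference code:

from fastapi import FastAPI, Request

app = FastAPI()
model = None

def load_model():
    # Placeholder: replace with your real model-loading code.
    class EchoModel:
        def predict(self, instances):
            return instances
    return EchoModel()

@app.on_event("startup")
def startup():
    global model
    model = load_model()

@app.get("/health")
def health():
    # Vertex AI polls this route; return 200 only once the model is ready.
    return {"status": "ok"}

@app.post("/predict")
async def predict(request: Request):
    # Vertex AI sends {"instances": [...]} and expects {"predictions": [...]} back.
    body = await request.json()
    return {"predictions": model.predict(body["instances"])}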
So I got down to the problem.
I installed Fiddler to see what happened.
My first problem was that Fiddler gave me a different error than the .NET Framework did.
Fiddler told me that the problem is the verification of the server certificate from the server I called.
To verify this, I added the following line of code and tested whether it works:
ServicePointManager.ServerCertificateValidationCallback += (o, c, ch, er) => true;
That worked. That means the problem is that we don't trust the authority's root certificate, or rather, the certificate isn't present in the trusted Root store.
Installing the cert into the trusted Root store solved the problem.
Thanks for the answers.
Edit: For anybody who might have the same problem - do not use this in a prod environment.
It is fine for a test case, but delete it afterwards!
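If you need a temporary bypass that is less dangerous than returning true for everything, a common stopgap is to accept only one known certificate. A sketch, with a hypothetical thumbprint placeholder:

// Hypothetical stopgap: accept the normal validation result, or one pinned certificate,
// instead of blindly trusting everything.
const string knownThumbprint = "PUT-YOUR-SERVER-CERT-THUMBPRINT-HERE";

ServicePointManager.ServerCertificateValidationCallback +=
    (o, certificate, chain, errors) =>
        errors == System.Net.Security.SslPolicyErrors.None
        || string.Equals(certificate?.GetCertHashString(), knownThumbprint,
                         StringComparison.OrdinalIgnoreCase);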
You can solve this by enabling the Directions API. Currently, the only way to do this is by going to this link (because it's in Legacy).
For some reason, some of your code might still be using the old Places API. You can activate it by going to this link (this is currently the only way, as it's in Legacy mode), which should fix your issue.
Create a vite.config.ts file and add allowedHosts: ["domain"] under the server option.
A pretty simple answer to this question is:
final Map json = {
"key1": {"key": [1,2,3]},
"key2": {"key": [4,5,7]},
"key3": {"key": [8,9,10]},
};
final firstEntry = json.entries.first.value;
print(firstEntry);
I wanted to accomplish this same thing, and @Veedrac got pretty close, but I did not want quotes around my floats and I also wanted to be able to control the amount of precision. To do this I had to use the decimal library as well as the simplejson dumps implementation (in order to get the use_decimal functionality). Hopefully this will help someone else:
from decimal import Decimal, ROUND_DOWN
import simplejson as sjson

def json_dumps_decimal(data, precision=6):
    def recursive_converter(obj):
        if isinstance(obj, dict):
            return {key: recursive_converter(value) for key, value in obj.items()}
        elif isinstance(obj, list):
            return [recursive_converter(item) for item in obj]
        elif isinstance(obj, float):
            decimal_obj = Decimal(obj)
            return decimal_obj.quantize(Decimal('1e-{0}'.format(precision)), rounding=ROUND_DOWN)
        return obj
    return sjson.dumps(recursive_converter(data), use_decimal=True)
Calling it as follows yields the following output:
data = {"dictresults": {"val1": 1000, "val2": 1000, "val3": 0.0000012}, "listresults": [0.000034, 0.0, 0.00001], 'flatresult': 0.00000123456}
jsonstr = json_dumps_decimal(data)
print(jsonstr)
{"dictresults": {"val1": 1000, "val2": 1000, "val3": 0.000001}, "listresults": [0.000033, 0.000000, 0.000010], "flatresult": 0.000001}
Based on @CouchDeveloper's answer and your own reply, you can create/add these global function overloads, possibly to keep the same ergonomics:
func autoreleasepool<Result>(_ perform: @escaping () async throws -> Result) async throws -> Result {
    try await Task {
        try await perform()
    }.value
}

func autoreleasepool<Result>(_ perform: @escaping () async -> Result) async -> Result {
    await Task {
        await perform()
    }.value
}
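Call sites then keep the same shape as the synchronous API; for example (loadImages is a hypothetical async throwing function):

// Usage sketch: same ergonomics as the classic autoreleasepool call.
let images = try await autoreleasepool {
    try await loadImages()
}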
Fixed: Thanks to everyone for their help. The correct regex was:
/(.*?)Player "(.*)" \(DEAD\)\ \(id=(.*) pos=<(.*)>\)\[HP: 0\] hit by Player "(.*)" \(id=(.*)\ pos=<(.*)>\) into (.*)\((.*)\) for(.*)damage (.*) with (.*) from (.*)\s*(.*) | Player "(.*)" \(DEAD\) (id=(.*)) (.*) killed by Player/
I ran the above on a larger log file and the returned data was as below
[0] => 22:09:04 | Player "GigglingCobra52" (DEAD) (id=BE2ABB8084EEC781014AC8E6B5C88A5A90F855BF pos=<1658.8, 15056.8, 451.4>)[HP: 0] hit by Player "Cogito8434" (id=26DE70CAEF00AE579AF8CA21ED3F1648DB316F1E pos=<1659.4, 14990.8, 441.4>) into Head(0) for 36.7336 damage (Bullet_308WinTracer) with M70 Tundra from 66.7822 meters
22:09:04 | Player "GigglingCobra52" (DEAD) (id=BE2ABB8084EEC781014AC8E6B5C88A5A90F855BF pos=<1658.8, 15056.8, 451.4>) killed by Player "Cogito8434" (id=26DE70CAEF00AE579AF8CA21ED3F1648DB316F1E pos=<1659.4, 14990.8, 441.4>) with M70 Tundra from 66.7822 meters
[1] => 22:12:08 | Player "Cogito8434" (DEAD) (id=26DE70CAEF00AE579AF8CA21ED3F1648DB316F1E pos=<1656.3, 15053.1, 444.8>)[HP: 0] hit by Player "GigglingCobra52" (id=BE2ABB8084EEC781014AC8E6B5C88A5A90F855BF pos=<1654.7, 15052.8, 444.8>) into Torso(12) for 29.0213 damage (Bullet_556x45) with M4-A1 from 1.57712 meters
22:12:08 | Player "Cogito8434" (DEAD) (id=26DE70CAEF00AE579AF8CA21ED3F1648DB316F1E pos=<1656.3, 15053.1, 444.8>) killed by Player "GigglingCobra52" (id=BE2ABB8084EEC781014AC8E6B5C88A5A90F855BF pos=<1654.7, 15052.8, 444.8>) with M4-A1 from 1.57712 meters
I can now use the returned array data and pull the info I need. Thanks again for everyone's help in guiding me to figure this out.
I am also having the same issue: the remote build succeeded but the function app is not appearing. How did you fix the issue?
how did you resolve the error mentioned above?
What could be causing this error in LibreOffice?
Your approach (directly modifying the entity within a transaction) is more efficient, simpler, and cleaner. On the other hand, your colleague’s approach is redundant in most cases, though useful when enforcing strict separation between data layers or when using immutable DTOs for business logic transformations. Why?
In Spring Boot with Hibernate, entities are managed within the persistence context when retrieved inside a transaction. Any changes made to an entity will be automatically persisted at the end of the transaction without needing an explicit save(), thanks to dirty checking.
Your colleague's approach (entity → DTO → modify DTO → DTO → entity → save) results in unnecessary object creation and extra processing, leading to increased CPU and memory usage.
Using EAGER fetching by default can lead to unnecessary data being loaded, especially for deeply nested objects, increasing query execution time.
Your approach with LAZY fetching ensures that only the necessary data is loaded, improving efficiency.
For API responses: DTOs help avoid exposing entity structures directly.
For projections or transformations: If the data needs to be reshaped, DTOs are a good choice.
For external system integrations: When working with external APIs, DTOs provide a stable contract.
However, for modifying and persisting entities, DTOs should not be mandatory unless:
The DTO encapsulates fields that shouldn’t be directly mapped to an entity (e.g., some computed fields).
The update process involves a separate layer of validation or business logic that should not interact directly with entities.
Your approach should be significantly faster because it avoids unnecessary object transformations and repo.save().
The only scenario where converting to a DTO and back might be useful is when performing complex updates involving multiple entities, ensuring that only modified fields are applied in a controlled manner.
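A minimal sketch of the dirty-checking style, assuming hypothetical User and UserRepository types (the names are illustrative, not from the question):

import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class UserService {

    private final UserRepository repo; // hypothetical Spring Data JPA repository

    public UserService(UserRepository repo) {
        this.repo = repo;
    }

    @Transactional
    public void renameUser(long id, String newName) {
        // The entity is managed inside the transaction; no explicit save() needed.
        User user = repo.findById(id).orElseThrow();
        user.setName(newName);
        // Dirty checking flushes the UPDATE when the transaction commits.
    }
}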
As rightly suggested, please provide a minimal reproducible example.
However, to get you started: one of the core issues is that await page.locator().all() returns an array of Locators, and a Locator's properties resolve asynchronously when accessed, so directly iterating over jobs does not give you resolved values.
Instead of this:
const jobs = await page.locator('[data-testid="job-item-title"]').all();
I would suggest using .evaluateAll() to fetch the job links directly instead of going through locators:
const jobs = await page.locator('[data-testid="job-item-title"]').evaluateAll(nodes => nodes.map(n => n.href));
The problem was related to the permissions of the security group involved. Once that was fixed I could make this work, even without the tunnel trick.
Did you ever get this working, Joshua Graham? I am running my application as a package, so wwwroot is read-only and caching does not work:
no such file or directory, mkdir '/home/site/wwwroot/yyyy/standalone/apps/yyyy/.next/cache'
Adjust the TypeArgument in the For Each activity's properties panel.
As a result, I use the LayoutUpdated event and, in its handler, this command:
if (chromiumWebBrowser.IsBrowserInitialized)
{
    chromiumWebBrowser.GetBrowser().GetHost().Invalidate(PaintElementType.View);
}
This gives the best result and the browser is resized correctly.
## GITHUB SOLUTION
- git config --global http.postBuffer 1048576000
- git config --global http.lowSpeedLimit 0
- git config --global http.lowSpeedTime 999999
The MySQL server starts with root privileges, so check /etc/passwd for root's shell. If it is /usr/local/bin/bash and that shell does not exist in the file system, change it to another shell that does exist (for example, change the last field of the root entry to /bin/sh).
Personally, I think workflow managers (WFMs) have the most value for automating repetitive tasks where adjustment of parameters or user intervention is basically never necessary. Good examples are the mentioned read mapping, or sorting of files - anything that just runs some tools with default settings. Or situations where things are tedious, for example when a number of intermediate files need to be created, collated, and ordered in some way so downstream tools can run.
If it comes down to running a couple of one-liners on the command line, then the added effort of implementing this in a WFM might be overkill.
Not any more, unless you wish to rely on the community.
all_links = await page.locator("locator copied from html").all()
for link in all_links:
    text = await link.inner_text()
    print(text)
Use a "selector", not an "XPath".
I got the "socket hang up" error after upgrading to .NET 8 and changing the Docker image. The problem was an incorrect port mapping in my `docker-compose.yml`. I had `ports: 8005:80`, which worked before the upgrade. However, the new .NET 8 images default to port 8080. Changing my `docker-compose.yml` to `ports: 8005:8080` fixed the issue.
**Key takeaway:** Ensure your `docker-compose.yml` port mapping matches the port your app listens on inside the container.
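For reference, the corrected mapping would look like this in `docker-compose.yml` (the service name is illustrative):

services:
  api:                 # hypothetical service name
    ports:
      - "8005:8080"    # host:container - .NET 8 images listen on 8080 by default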
There were 2 things wrong:
1. I needed (foreignKeys.length > 0) && instead of foreignKeys &&
2. I needed Object.values(foreignKeys[0]).map instead of foreignKeys[0].values.map
[Note: using foreignKeys[0].values.map was a suggestion from Copilot and it looked strange to me at the time it was suggested, but I thought to myself, "well, it's coming from an AI so it must be correct." Lesson learned!]
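A small sketch of the corrected guard and mapping, with a hypothetical foreignKeys row shape (illustrative only):

type ForeignKeyRow = Record<string, string>; // hypothetical row shape

function renderForeignKeys(foreignKeys: ForeignKeyRow[] | undefined): string[] {
  // Guard on length, not just truthiness: an empty array [] is truthy in JavaScript.
  if (foreignKeys && foreignKeys.length > 0) {
    // Object.values() reads the row's property values; .values is not a method on a plain object.
    return Object.values(foreignKeys[0]).map(value => value.toUpperCase());
  }
  return [];
}

// Example: logs ["USERS", "ID"]
console.log(renderForeignKeys([{ table: "users", column: "id" }]));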
I know it's a very old thread but perhaps my answer can help other people with the same issue.
In my case, it was happening because the solution was not really running. Due to inactivity, the IIS application pool had gone idle. To correct things, I just had to access the application through the browser (or a tool such as Postman).
After that, just reload the apps and everything should work.
Try opening your workspace so that your app is the root with the <properties_folder>/launchSettings.json one level down from the app.
You can define your own custom resource and use it from the context: prometheus.run_gauge.labels(...).set(1)
from dagster import InitResourceContext
from dagster_prometheus import PrometheusResource
from prometheus_client import Gauge
from pydantic import PrivateAttr

class CustomPrometheusResource(PrometheusResource):
    _run_gauge: Gauge = PrivateAttr(default=None)

    def setup_for_execution(self, context: InitResourceContext) -> None:
        super().setup_for_execution(context)
        self._run_gauge = Gauge(
            name='dagster_run_gauge',
            documentation='Status of Dagster runs',
            labelnames=['job_name', 'status', 'cluster'],
            registry=self._registry,
        )

    @property
    def run_gauge(self):
        return self._run_gauge
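For completeness, a hypothetical op showing how the gauge might then be set (names are illustrative; this assumes Dagster's Pythonic resource injection):

from dagster import op

@op
def report_run_status(prometheus: CustomPrometheusResource):
    # Set the gauge for this job/cluster combination.
    prometheus.run_gauge.labels(job_name="my_job", status="SUCCESS", cluster="dev").set(1)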
It would be good to at least have a fully working published set of data. Currently some things work in one API, but when its response data is used for another we get a 404. It's very hard to even test a full cycle in development. The PRD data option is not good when, for example, we can't even test the booking process.
I was able to do it with `dask.array`.
import dask.array as da
import numpy as np
import xarray as xr

coords = ...
dims = ...
var_name = 'value'
chunks = (1, 13, 36, 128, 128)
encoding = {var_name: {'chunks': chunks}}
store = 'test.zarr'

daskarray = da.empty(
    (6, 13, 36, 699, 1920),
    chunks=chunks,
    dtype='float32',
)
daskarray[:] = np.nan

xr.DataArray(
    daskarray,
    coords=coords,
    dims=dims,
).to_dataset(name=var_name).to_zarr(store, mode='w', encoding=encoding)
@tsegismont posted a comment confirming that the answer to "Vert.x httpClient/webClient process response chunk by chunk or as stream" is still up to date, and that HttpClient should be used when HTTP streaming must be connected with RecordParser. This means the second solution from the question is preferred:
RecordParser parser = RecordParser.newDelimited("\n", h -> log.info("r={}", h.toString()));
client
    .request(HttpMethod.GET, sut.actualPort(), "localhost", "/stream?file=stream1.txt")
    .compose(HttpClientRequest::send)
    .onComplete(
        ar -> {
          if (ar.succeeded()) {
            HttpClientResponse response = ar.result();
            response.handler(parser);
            response.endHandler(e -> ctx.completeNow());
          } else {
            ctx.failNow(ar.cause());
          }
        });
Ideally, a PR for the Vert.x documentation should clarify this.
Use onEndEditing instead of onBlur if you are dealing with multiple TextInput components, as in the sketch below.
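A minimal sketch, assuming a plain two-field form (the field names are illustrative):

import React, { useState } from 'react';
import { TextInput, View } from 'react-native';

export function NameForm() {
  const [first, setFirst] = useState('');
  const [last, setLast] = useState('');

  return (
    <View>
      <TextInput
        value={first}
        onChangeText={setFirst}
        // Fires once editing ends, instead of on every focus change between inputs.
        onEndEditing={e => console.log('first name done:', e.nativeEvent.text)}
      />
      <TextInput
        value={last}
        onChangeText={setLast}
        onEndEditing={e => console.log('last name done:', e.nativeEvent.text)}
      />
    </View>
  );
}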
Check the preinstalled software on the default Microsoft-hosted images; let's see what's on the current windows-latest, for example:
You'll notice that both WSL and Docker are installed. But there are at least 2 problems for you:
- at the bottom of the page you can see that the cached Docker images are all Windows-based
- the WSL installed is version 1, incompatible with recent versions of virtualization environments
You could try to set up a step that updates WSL to v2 and then a command to switch Docker to Linux-based virtualization. But that's going to cost you pipeline execution time, and given the particularly pricey costs of Azure, I'd suggest another way:
You could try the new windows-2025 image for this pipeline, which comes with Docker and WSLv2 preinstalled as defaults; see the sketch below.
Theoretically this new image has been created specifically to resolve your problem, which is in line with Micro$oft moving towards the Windows Server + containerized Linux services model.
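A minimal azure-pipelines.yml sketch of opting into that image (assuming windows-2025 is available to your organization):

pool:
  vmImage: 'windows-2025'   # hosted image with Docker and WSL2 preinstalled

steps:
  - script: docker version
    displayName: Check preinstalled Docker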
All right, here's a potential workaround, although I don't know if this did it or something else was responsible, so I'd like to keep the question active. What I did was make a new emulator for the latest version of Android and then test it in Meerkat. Then I went back and tried the emulators in Meerkat that I had previously made in Giraffe, and they seemed to work now. Not sure if that's coincidence or related; I would still be interested in any feedback on this.
I solved the issue by creating a new folder on my desktop and assigning it as my derived data folder: go to Xcode > Settings > Locations > DerivedData, select Custom, and set your path.
Tadaa....!
I found it! You have to add a condition so that, if it finds 2 or more identical objects, it deletes the relationship and recreates one.
I'm closing the discussion. Have a nice day.
public function updateUser($object){
    $userToChange = $object->users()->first();
    if (empty($userToChange)) {
        $object->users()->save(User::where('id', Auth::user()->getAuthIdentifier())->first());
    } elseif ($object->users()->where('user_id', $userToChange->id)->count() >= 1) { // condition to detect if several identical objects exist
        $object->users()->detach($userToChange->id);
        $object->users()->save(User::where('id', Auth::user()->getAuthIdentifier())->first());
    } else {
        if (Auth::user()->getAuthIdentifier() != $userToChange->id) {
            $object->users()->sync(User::where('id', Auth::user()->getAuthIdentifier())->first());
        }
    }
}
This could mainly be due to a path error: your Python interpreter may be running on a separate path from where you are downloading the package. If possible, I would recommend using Anaconda Navigator or Jupyter Notebook, as they each offer a way to create an environment that will have your dependencies.
Hey, I am facing the same error now, using Spark Structured Streaming to read from Kafka and sink into an Iceberg table. Did you fix it?
This is a lifesaver! Thanks
I spent an entire DAY not getting the event to fire and it was the DTR and RTS issue!
So I tried to reproduce your problem by making a Snack on Expo, and I reached the same result: I'm not able to render the proper fontFamily inside the canvas, which I think is an issue of the webview inside the react-native-canvas package. You cannot render the font family because expo-font does not reach the webview context.
Also, looking through the pull requests of the canvas repo, I saw a contributor trying to implement a Font API, but it didn't get merged: https://github.com/iddan/react-native-canvas/pull/294.
To conclude, the expo-font plugin doesn't have the flexibility to add fonts inside webview contexts, at least not this way.
This is the code that I made to reproduce this issue:
import { Text, SafeAreaView, StyleSheet } from 'react-native';
import { useEffect } from 'react';
import { useFonts } from 'expo-font';
import Canvas from 'react-native-canvas';

const WaitFont = (props) => {
  const [loaded, error] = useFonts({
    'eightBit': require('./assets/fonts/eightBit.ttf'),
  });

  useEffect(() => {
    if (loaded || error) {
      console.log('Working?')
    }
  }, [loaded, error]);

  if (!loaded && !error) {
    return null;
  }

  return props.children
}

export default function App() {
  const handleCanvas = (canvas) => {
    if (!canvas) return;
    const ctx = canvas.getContext('2d');
    ctx.font = '500 26px "eightBit"';
    ctx.fillText("Hello from <Canvas />", 8, 28);
  };

  return (
    <SafeAreaView style={styles.container}>
      <WaitFont>
        <Text style={styles.paragraph}>
          {`Hello from <Text />`}
        </Text>
        <Canvas style={styles.canvas} ref={handleCanvas} />
      </WaitFont>
    </SafeAreaView>
  );
}

const styles = StyleSheet.create({
  container: {
    flex: 1,
    flexDirection: 'column',
    justifyContent: 'center',
    backgroundColor: '#ecf0f1',
    padding: 8,
  },
  paragraph: {
    fontSize: 50,
    fontWeight: '500',
    fontFamily: 'eightBit'
  },
  canvas: {
    borderWidth: 1,
    borderColor: 'red',
    width: '100%'
  }
});
When you use an expression template and you also want a theme, you should use a template theme.
The following configuration should work:
{
  "Name": "Console",
  "Args": {
    "formatter": {
      "type": "Serilog.Templates.ExpressionTemplate, Serilog.Expressions",
      "template": "{@t:HH:mm:ss.fff zzz} | {@l:u3} | {@m}\n{@x}",
      "theme": "Serilog.Templates.Themes.TemplateTheme::Code, Serilog.Expressions"
    }
  }
}
Thanks again for this answer. I modified your regex a bit in order to work correctly with the LibreOffice REGEX function: I added .* twice, at the beginning and at the end of the regex.
With blah blahTESTblab la14blah-15S rebla in cell A1:
=REGEX(A1;"(?i)(TEST)(?:.*(\d{1,2}S)|)";"★$1$2")
gives blah blah★TEST15S rebla
=REGEX(A1;"(?i).*(TEST)(?:.*(\d{1,2}S)|).*";"★$1$2")
gives ★TEST15S
The approach below works:
URL queryURL = getClass().getClassLoader().getResource("db.mongo.query/" + this.name + ".json");
String query = new String(Files.readAllBytes(Paths.get(queryURL.toURI())));
@meshack-pi This fixed it for me; that was exactly my issue, a dependency on HttpClientModule. Thanks for your answer.
If anyone else has a similar issue, what I found worked for me is either
or
There may be some fiddly workarounds possible with setting global TextEncoders and TextDecoders, as I've seen suggested, but this didn't work for me.
I have a similar problem. I haven't completely solved it yet, but I have discovered some interesting findings. Maybe this will help someone. I'm glad you found the cause in editor.autoIndent. But it still doesn't solve the problem for me. So, I'll leave the information here. In my case, the “indentNextLinePattern” is not working.
1 - It is strange that you expect a single “increaseIndentPattern” to work. My experiments and reading the information showed that it doesn't work alone. It needs a pair of “decreaseIndentPattern” to work. Even stranger is that you wrote “Then I discovered that I had ‘editor.autoIndent’ in the settings.json file: “none”, which was the problem.” - So just by enabling “editor.autoIndent”, you got the single “increaseIndentPattern” rule working? Without its “decreaseIndentPattern” pair? That's very strange.
2 - Expressions like “if (3 > 2) {” have no meaning here. The thing is that VS Code automatically adds indents if you press “enter” after an opening bracket. This works even if no separate automatic indentation rule is defined for the language. It works for brackets (, [, {.
3 - That said, in my case, the “increaseIndentPattern+decreaseIndentPattern” and “onEnterRules” options work. But “indentNextLinePattern” does not work. That's the weirdness.
4 - Neural networks help well with regex: ChatGPT, GitHub Copilot, Perplexity, Claude. They are also good at explaining the principles and design standards of language packs. With neural nets, I was making fast progress in creating custom language highlighting, until I got stuck with this weird problem with "indentNextLinePattern". All in all, I created the bulk of the highlighting in about 3 days. Polishing and working out the details took me another week or so. Without neural networks, I would have been sorting it out for at least a month, and most likely I would have quickly abandoned it because of the complexity and intricacy of the topic. Many thanks to neural networks. I love them very much.
The solution for me was to add a height in the ContentPage definition. For example:
<ContentPage ...
HeightRequest="{OnIdiom Desktop=740,
Phone=*}">
I have the same problem; how did you resolve it?
This type of error is often linked to a Java / Gradle / AGP version incompatibility.
Can you run the following command from the root of the Flutter project to do a first check:
flutter analyze --suggestions
According to this answer: https://stackoverflow.com/a/537831/2773515 (from the first question linked in your post), I think the best practice you mentioned is clarified there. So, IMO, no, exceptions shouldn't manipulate anything based on an "exception != error" directive. Your validation class will be responsible for checking and manipulating the entity carried by your custom exception, which in turn is responsible for alerting your application that a business rule was violated.
Maybe you should consider not returning data from the custom exception, or not throwing an exception at all, and instead have your validation class return a custom class with the manipulated entity and a false state result.
It seems I was able to override it in my local profile with (not specifying the value):
logging.structured.format.console=
I was also looking for a "disable" kind of value, but there is nothing like that.
I got Claude Max AI to help me solve this matter:
"I've identified the issue - there's improper PowerShell code in your postgresql.conf file that's preventing PostgreSQL from starting. This happened when you tried to install pgvector.
Follow these steps to fix the problem:
Open a command prompt as Administrator
Run: notepad "C:\Program Files\PostgreSQL\17\data\postgresql.conf"
Find and delete these lines (around line 769):
param($match)
$libraries = $match.Groups[1].Value
if ([string]::IsNullOrWhiteSpace($libraries)) {
return "shared_preload_libraries = 'vector'"
} else {
return "shared_preload_libraries = '$libraries,vector'"
}
Add this line in place of the deleted code:
shared_preload_libraries = 'vector' # (change requires restart)
Save the file and close Notepad
Start the PostgreSQL service:
Using the command: net start postgresql-x64-17
Or alternatively, restart the computer
This will fix the syntax errors in your configuration file and properly enable pgvector."
@Jon Dingman You are a genius!
I am infinitely grateful to you! Your solution saved me hours of work. I searched for a solution myself for a long time, but your contribution was exactly what I needed.
I'm hitting the same wall; any progress?
This issue has been resolved.
Thank you for your response.
Yes, the problem is with multiple versions of @types/react. Just make sure the versions of @types/react are the same while running yarn why @types/react.
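If yarn why shows mismatched copies, one common fix with Yarn classic is to pin a single version via a resolutions entry in package.json (the version below is illustrative):

{
  "resolutions": {
    "@types/react": "18.2.0"
  }
}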
SomeGuy just saved my life with OpenSSH on Windows. I spent almost a day trying to configure public-key auth with no success.
Then I found your post about the encoding of authorized_keys and applied it to administrators_authorized_keys in C:/ProgramData/ssh/.
I saved it as UTF-8 with VS Code and finally I could log in without a password.
OpenSSH public-key login on Windows is very obscure... even with DEBUG on, the sshd service doesn't tell you why it is denying the key.
Same here. Let me know if anyone could find a solution.
Still an issue nowadays on some older printers.
In my case, a Honeywell PC42T Plus. I followed Levite's idea (thanks) and used the FB command followed by the GB command (box) with a thick white border. In my case:
^GB240,50,50,W
and it works fine.
In newer versions of Doxygen, the \include special command has an additional option: with {doc}, the contents of file.txt are treated as if they were directly in the source file.
// --- SourceFile.c ---
/*! \brief description
* \include{doc} file.txt
*/
// --- file.txt ---
// \details description
YOU'RE THE BEST!! This solution saved my day. Thank you very much!
Try to parameterize the Dispatcher so it is configurable according to the context. If the code uses Dispatchers.Main or Dispatchers.IO, it is advisable to allow injecting it as a parameter. That way, a TestDispatcher can be provided in tests, ensuring that execution occurs at the right time, as sketched below.
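A minimal sketch of the idea, with a hypothetical repository class:

import kotlinx.coroutines.CoroutineDispatcher
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.runBlocking
import kotlinx.coroutines.withContext

// Inject the dispatcher instead of hard-coding Dispatchers.IO inside the class.
class UserRepository(
    private val ioDispatcher: CoroutineDispatcher = Dispatchers.IO
) {
    suspend fun loadUser(id: String): String = withContext(ioDispatcher) {
        // ...blocking or IO-bound work would go here
        "user-$id"
    }
}

fun main() = runBlocking {
    // Production code uses the default; a test would pass StandardTestDispatcher(testScheduler).
    println(UserRepository().loadUser("42"))
}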
After you type 'jupyter notebook' at the Anaconda Prompt or command line, at the bottom of the output you will find a URL like the one below. Copy and paste that URL; it will work.
http://localhost:8888/?token=**------------------------------------**
I haven't got the rep to respond to @Mark Brackett's answer; I had to change a .ToInt32() to .ToInt64(), which stopped the OverflowException I was getting with the example code.
This line
var pCurrentSessionInfo = new IntPtr(pSessionInfo.ToInt32() + (NativeMethods.SESSION_INFO_502.SIZE_OF * i));
Becomes
var pCurrentSessionInfo = new IntPtr(pSessionInfo.ToInt64() + (NativeMethods.SESSION_INFO_502.SIZE_OF * i));
In addition to byLazy's answer, you can also use onCompletion, like this:
val f1 = flowOf(1, 2)
val f2 = flowOf(3, 4)
val f = f1.onCompletion { if(it == null) emitAll(f2) }
The repeated change detection with ngModel is a common characteristic of two-way binding with input elements. To reduce it, use ngModelChange where possible and try debouncing the date-picker selection events, as sketched below.
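A small sketch of the debounce idea with RxJS, assuming a hypothetical date-picker binding (component and field names are illustrative):

import { Component, OnDestroy } from '@angular/core';
import { FormsModule } from '@angular/forms';
import { Subject, Subscription } from 'rxjs';
import { debounceTime } from 'rxjs/operators';

@Component({
  selector: 'app-date-filter',
  standalone: true,
  imports: [FormsModule],
  template: `<input type="date" [ngModel]="date" (ngModelChange)="dateChanges$.next($event)">`,
})
export class DateFilterComponent implements OnDestroy {
  date = '';
  dateChanges$ = new Subject<string>();
  private sub: Subscription;

  constructor() {
    // React only after the user has stopped changing the date for 300 ms.
    this.sub = this.dateChanges$.pipe(debounceTime(300)).subscribe(value => {
      this.date = value;
      // ...trigger the expensive downstream work here
    });
  }

  ngOnDestroy() {
    this.sub.unsubscribe();
  }
}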
I know that some compilers use a key tool that makes the program have valid licenses and such; that may be it.
I got this working using Authorization Code Flow with PKCE by:
- Reading the code hash from the URL
- Calling msalInstance.acquireTokenByCode with it
In addition to jammykam's answer, this issue occurs because Sitecore doesn’t automatically remove orphaned renderings when a parent rendering with a nested placeholder is deleted. You need to clean up orphaned renderings in the layout field.
I’ve written a blog post detailing the issue and a solution - check it out here.
column += increase
This actually only increases the loop variable. You actually want to increase the element in your matrix, which would look something like this:
def change_value(my_matrix: list, increase: int):
    for i, row in enumerate(my_matrix):
        for j in range(len(row)):
            my_matrix[i][j] += increase
    return my_matrix

matrix = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
change_value(matrix, 3)
What is your Node version? Have you tried using the latest LTS Node.js release?
If you have a bunch of emails or a PST file and you need to redact multiple emails at once with an interactive editor, you can also just use our tools here: https://emailtools.hexamail.com/redact
For some users, just updating Android Studio to the newest version seems to fix it.
Simply replacing % with %%, as previously mentioned, did not work for me because I encountered a password authentication failed error.
To resolve this, I URL-encoded the password first and then replaced %.
Here’s my final code:
_pg_password = quote_plus(os.getenv("DB_PASSWORD", "default")).replace("%", "%%")
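For illustration, with a hypothetical password p@ss%word the two steps produce:

from urllib.parse import quote_plus

pw = quote_plus("p@ss%word")       # 'p%40ss%25word' - URL-safe
pw_ini = pw.replace("%", "%%")     # 'p%%40ss%%25word' - the doubled % survives configparser-style interpolation
print(pw, pw_ini)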
I have finally figured out how to solve what I wanted.
I post this here as it might help someone else in the same situation.
I now install the program in "administrative install mode", but the old software is uninstalled as the logged-in user, if that user has previously installed the old software.
[Code]
function PrepareToInstall(var NeedsRestart: Boolean): string;
var
  OldAppGuid, SubKeyName: string;
  OldAppFound: Boolean;
  ResultCode: Integer;
begin
  NeedsRestart := false;
  result := '';
  begin
    OldAppGuid := '{xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}';
    SubKeyName := 'SOFTWARE\WOW6432Node\Microsoft\Windows\CurrentVersion\Uninstall\' + OldAppGuid;
    OldAppFound := RegKeyExists(HKEY_LOCAL_MACHINE, SubKeyName);
    if not OldAppFound then
    begin
      SubKeyName := 'SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\' + OldAppGuid;
      OldAppFound := RegKeyExists(HKEY_LOCAL_MACHINE, SubKeyName);
    end;
    if OldAppFound then
    begin
      ExecAsOriginalUser(ExpandConstant('{sys}\msiexec.exe'),  // Filename
        '/X ' + OldAppGuid + ' /qb- REBOOT=ReallySuppress',    // Params
        '',                                                    // WorkingDir
        SW_SHOW,                                               // ShowCmd
        ewWaitUntilTerminated,                                 // Wait
        ResultCode);                                           // ResultCode
    end;
  end;
end;
I kept getting this error when calling signtool sign in the post-build event of a .NET project. It turns out I simply had an older version of signtool.exe and the Windows Kits. After updating the Windows SDK (to version 11), it was resolved.
Clone and scan it?
git clone https://github.com/google/flatbuffers -b v23.1.21
grype ./flatbuffers
✔ Indexed file system flatbuffers
✔ Cataloged contents e3c82e6c6bf71c090ee235f26b43aee9b40f120eb4652d8626c7cd714bead4fc
├── ✔ Packages [222 packages]
├── ✔ File digests [17 files]
├── ✔ File metadata [17 locations]
└── ✔ Executables [0 executables]
✔ Scanned for vulnerabilities [13 vulnerability matches]
├── by severity: 0 critical, 7 high, 6 medium, 0 low, 0 negligible
└── by status: 13 fixed, 0 not-fixed, 0 ignored
[0000] WARN no explicit name and version provided for directory source, deriving artifact ID from the given path (which is not ideal)
NAME INSTALLED FIXED-IN TYPE VULNERABILITY SEVERITY
braces 3.0.2 3.0.3 npm GHSA-grv7-fg5c-xmjg High
cross-spawn 7.0.3 7.0.5 npm GHSA-3xgq-45jj-v275 High
esbuild 0.16.4 0.25.0 npm GHSA-67mh-4wv8-2f99 Medium
google.golang.org/grpc v1.35.0 1.56.3 go-module GHSA-m425-mq94-257g High
google.golang.org/grpc v1.35.0 1.56.3 go-module GHSA-qppj-fm5r-hxr3 Medium
google.golang.org/grpc v1.39.0-dev 1.56.3 go-module GHSA-m425-mq94-257g High
google.golang.org/grpc v1.39.0-dev 1.56.3 go-module GHSA-qppj-fm5r-hxr3 Medium
micromatch 4.0.5 4.0.8 npm GHSA-952p-6rrq-rcjv Medium
semver 5.6.0 5.7.2 npm GHSA-c2qf-rxjj-qqgw High
semver 7.3.7 7.5.2 npm GHSA-c2qf-rxjj-qqgw High
word-wrap 1.2.3 1.2.4 npm GHSA-j8xg-fqg3-53r7 Medium
wget https://repo1.maven.org/maven2/org/rogach/scallop_2.13/5.1.0/scallop_2.13-5.1.0-sources.jar
grype ./scallop_2.13-5.1.0-sources.jar
✔ Indexed file system ./scallop_2.13-5.1.0-sources.jar
✔ Cataloged contents 79a24a3a5c54dd926ea9b41cc1258e58e395f25141c518b1c14afb869cb0bb9d
├── ✔ Packages [1 packages]
├── ✔ File digests [1 files]
├── ✔ File metadata [1 locations]
└── ✔ Executables [0 executables]
✔ Scanned for vulnerabilities [0 vulnerability matches]
├── by severity: 0 critical, 0 high, 0 medium, 0 low, 0 negligible
└── by status: 0 fixed, 0 not-fixed, 0 ignored
No vulnerabilities found
Thanks to CDP1802 for the contribution; I followed and used his code as a template. I have some constraints: it's part of a small accounting system where both income and expenses are displayed on the same sheet, so I couldn't test against row 1 / the first row. Instead, if the header/total hasn't shown up within 3 rows, I break. My test criterion for a header row is that every column has content. My test criterion for the total row is that it differs from a totally empty row. Some of the rows are not completely empty outside the "Range" of the table, so I am a little bit stubborn :-)) and use only the first to last column of the table, offset by a certain number of rows, like this:
Private Function CorrectRangeForHeaderRows(rRng As Range) As Range
    'rRng.Select
    Dim tmpRng As Range
    Set tmpRng = rRng.Range(Cells(1, 1), Cells(1, rRng.Columns.Count))
    'tmpRng.Select
    'Loop up to a full header row
    Dim lCor As Long: lCor = 1
    Do While WorksheetFunction.CountA(tmpRng.Offset(-lCor)) < rRng.Columns.Count
        lCor = lCor + 1
        If rRng.Row - lCor <= 1 Then
            Exit Do
        End If
    Loop
    If rRng.Row - lCor > 1 Then
        Set rRng = rRng.Offset(-lCor).Resize(rRng.Rows.Count + lCor)
    End If
    'rRng.Select
    Set CorrectRangeForHeaderRows = rRng   'Set is required when returning an object
End Function

Private Function CorrectForTotalRow(rRng As Range) As Range
    'rRng.Select
    Dim tmpRng As Range
    Set tmpRng = rRng.Range(Cells(1, 1), Cells(1, rRng.Columns.Count))
    'tmpRng.Select
    Dim lCor As Long: lCor = rRng.Rows.Count
    Do While WorksheetFunction.CountA(tmpRng.Offset(lCor)) = 0
        lCor = lCor + 1
        If lCor > rRng.Rows.Count + 2 Then
            Exit Do
        End If
    Loop
    If lCor <= rRng.Rows.Count + 2 Then
        Set rRng = rRng.Resize(lCor + 1)
    End If
    'rRng.Select
    Set CorrectForTotalRow = rRng   'Set is required when returning an object
End Function